Search results for: statistical modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7658

1628 Quality Assurance Comparison of Map Check 2, Epid, and Gafchromic® EBT3 Film for IMRT Treatment Planning

Authors: Khalid Iqbal, Saima Altaf, M. Akram, Muhammad Abdur Rafaye, Saeed Ahmad Buzdar

Abstract:

Objective: Verification of patient-specific intensity-modulated radiation therapy (IMRT) plans using 2-D detectors has become increasingly popular because of their ease of use and immediate readout of results. The purpose of this study was to test and compare several 2-D detectors for dosimetric quality assurance (QA) of IMRT, with a view to finding alternative QA methods. Material and Methods: Twenty IMRT patients (12 brain and 8 prostate) were planned on the Eclipse treatment planning system for a Varian Clinac DHX at both 6 MV and 15 MV. Verification plans for all patients were also generated and delivered to MapCheck 2, EPID (electronic portal imaging device), and Gafchromic EBT3 film. Gamma index analyses were performed with different criteria to evaluate and compare the dosimetric results. Results: For the brain cases with EBT3 film, statistical analysis shows passing rates of 99.55%, 97.23%, and 92.9% at 6 MV and 99.53%, 98.3%, and 94.85% at 15 MV using criteria of ±5%/3 mm, ±3%/3 mm, and ±3%/2 mm, respectively; for the prostate cases, the ±5%/3 mm and ±3%/3 mm criteria give passing rates of 94.55% and 90.45% at 6 MV and 95.25% and 95% at 15 MV. MapCheck 2 shows passing rates of 98.17%, 97.68%, and 86.78% at 6 MV and 94.87%, 97.46%, and 88.31% at 15 MV for the brain under the ±5%/3 mm, ±3%/3 mm, and ±3%/2 mm criteria, respectively, whereas the ±5%/3 mm and ±3%/3 mm criteria give passing rates of 97.7% and 96.4% at 6 MV and 98.75% and 98.05% at 15 MV for the prostate. EPID gamma analysis shows passing rates of 99.56%, 98.63%, and 98.4% for the brain and 100% and 99.9% for the prostate using the same criteria as for MapCheck 2 and EBT3 film.
Conclusion: Excellent passing rates were obtained for all dosimeters when compared with the planar dose distributions for both 6 MV and 15 MV IMRT fields. EPID results are better than those of EBT3 film and MapCheck 2; part of this difference is likely real, and part is attributable to film handling and differences in treatment setup verification, which contribute to dose distribution differences. Overall, all three dosimeters yield results within the limits of AAPM Report 120.
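The gamma index underlying these passing rates combines a dose-difference criterion with a distance-to-agreement (DTA) criterion: a point passes if some reference point lies within the combined tolerance ellipse. As an illustration only (not the authors' QA software), a minimal 1-D global gamma computation on a synthetic dose profile might look like:

```python
import numpy as np

def gamma_passing_rate(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified 1-D global gamma analysis (illustrative sketch).

    dose_tol: fractional dose-difference criterion (0.03 for +/-3%)
    dist_tol_mm: distance-to-agreement criterion in mm
    Returns the percentage of evaluated points with gamma <= 1.
    """
    ref_dose = np.asarray(ref_dose, dtype=float)
    eval_dose = np.asarray(eval_dose, dtype=float)
    positions = np.asarray(positions, dtype=float)
    dmax = ref_dose.max()  # global normalization dose
    gammas = np.empty_like(eval_dose)
    for i, (x, d) in enumerate(zip(positions, eval_dose)):
        dose_term = (d - ref_dose) / (dose_tol * dmax)
        dist_term = (x - positions) / dist_tol_mm
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return 100.0 * np.mean(gammas <= 1.0)

# identical measured and planned profiles: every point passes
x = np.linspace(0, 50, 51)          # positions in mm
d = np.exp(-((x - 25) / 10) ** 2)   # synthetic Gaussian dose profile
print(gamma_passing_rate(d, d, x))  # 100.0
```

Real film and array analyses work on 2-D planes with interpolation and low-dose thresholds, but the pass/fail criterion is the same.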

Keywords: gafchromic EBT3, radiochromic film dosimetry, IMRT verification, EPID

Procedia PDF Downloads 421
1627 The Effect of Artificial Intelligence on Digital Factory

Authors: Sherif Fayez Lewis Ghaly

Abstract:

Factory planning has the mission of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and thus an essential tool. Short-term rescheduling can no longer be handled by on-site inspections and manual measurements. The tight time schedules require up-to-date planning models. Due to the high variation rate of factories described above, a method for rescheduling factories on the basis of a current digital factory twin is conceived and designed for practical application in factory restructuring projects. The focus is on rebuild processes. The goal is to keep the planning basis (the digital factory model) up to date for conversions within a factory. This calls for the application of a methodology that reduces the deficits of existing approaches. The aim is to show how a digital factory model can be kept up to date during ongoing factory operation. A method based on photogrammetry technology is presented. The focus is on developing a simple and cost-effective way to track the various changes that occur in a factory building during operation.
The method is preceded by a hardware and software evaluation to identify the most cost-effective and quickest variant.

Keywords: building information modeling, digital factory model, factory planning, maintenance digital factory model, photogrammetry, restructuring

Procedia PDF Downloads 28
1626 Application of Sentinel-2 Data to Evaluate the Role of Mangrove Conservation and Restoration on Aboveground Biomass

Authors: Raheleh Farzanmanesh, Christopher J. Weston

Abstract:

Mangroves are forest ecosystems located in the inter-tidal regions of tropical and subtropical coastlines that provide many valuable economic and ecological benefits for millions of people, such as preventing coastal erosion, providing breeding and feeding grounds, improving water quality, and supporting the well-being of local communities. In addition, mangroves capture and store large amounts of carbon in biomass and soils, playing an important role in combating climate change. The decline in mangrove area has prompted government and private sector interest in mangrove conservation and restoration projects to achieve multiple Sustainable Development Goals, from reducing poverty to improving life on land. Mangrove aboveground biomass plays an essential role in the global carbon cycle and in climate change mitigation and adaptation by reducing CO2 emissions. However, little information is available about the effectiveness of sustainable mangrove management on mangrove area change and aboveground biomass (AGB). Here, we propose a method for mapping, modeling, and assessing mangrove area and AGB in two Global Environment Facility (GEF) blue forests projects based on Sentinel-2 Level 1C imagery over their conservation lifetimes. A support vector regression (SVR) model was used to estimate AGB in the Tahiry Honko project in Madagascar and the Abu Dhabi Blue Carbon Demonstration Project (Abu Dhabi, United Arab Emirates). The results showed that mangrove forest area and AGB declined in the Tahiry Honko project, while they increased in the Abu Dhabi project after the conservation initiative was established. The results provide important information on the impact of mangrove conservation activities and contribute to the development of remote sensing applications for mapping and assessing mangrove forests in blue carbon initiatives.
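The SVR-based biomass estimation step can be sketched with scikit-learn. The two predictor features and the synthetic AGB relationship below are assumptions for illustration, not the study's actual Sentinel-2 bands or field data:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# hypothetical spectral predictors (e.g. two vegetation-index-like features)
X = rng.uniform(0.0, 1.0, size=(200, 2))
# synthetic AGB in Mg/ha with an assumed dependence on the predictors
y = 150.0 * X[:, 0] + 40.0 * X[:, 1] + rng.normal(0.0, 5.0, 200)

# scale features, then fit an RBF-kernel support vector regressor
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X, y)
print(model.score(X, y) > 0.8)  # strong fit expected on this synthetic data
```

In practice the model would be trained on field-measured AGB plots matched to pixel values and validated on held-out plots before mapping biomass across the image.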

Keywords: blue carbon, mangrove forest, REDD+, aboveground biomass, Sentinel-2

Procedia PDF Downloads 73
1625 T Cell Immunity Profile in Pediatric Obesity and Asthma

Authors: Mustafa M. Donma, Erkut Karasu, Burcu Ozdilek, Burhan Turgut, Birol Topcu, Burcin Nalbantoglu, Orkide Donma

Abstract:

The mechanisms underlying the association between obesity and asthma may be related to a decreased immunological tolerance induced by defective function of regulatory T cells (Tregs). The aim of this study is to establish the potential link between these diseases and CD4+ CD25+ FoxP3+ Tregs as well as T helper cells (Ths) in children. This is a prospective case-control study. Obese (n=40), asthmatic (n=40), asthmatic obese (n=40), and healthy children (n=40) without any acute or chronic diseases were included in this study. Obese children were evaluated according to WHO criteria. Asthmatic patients were chosen based on GINA criteria. Parents were asked to fill out a questionnaire, and informed consent forms were obtained. Blood samples were labeled for CD4+, CD25+, and FoxP3+ in order to count Tregs and Ths by flow cytometry. Statistical analyses were performed, with p≤0.05 chosen as the significance threshold. Tregs, which exhibit an anti-inflammatory nature, were significantly lower in the obese (0.16%; p≤0.001), asthmatic (0.25%; p≤0.01), and asthmatic obese (0.29%; p≤0.05) groups than in the control group (0.38%). Th counts were higher in the asthma group than in the control (p≤0.01) and obese (p≤0.001) groups. T cell immunity plays important roles in the pathogeneses of obesity and asthma. The decreased numbers of Tregs found in obese, asthmatic, and asthmatic obese children may help to elucidate some questions in the pathophysiology of these diseases. For HOMA-IR levels, no significant difference was noted between the control and obese groups, but statistically higher values were found for obese asthmatics. The values obtained in all groups were below the critical cut-off points. This finding makes the statistically significant differences observed between the Tregs of the obese, asthmatic, obese asthmatic, and control groups all the more valuable. These findings will be useful in the diagnosis and treatment of these disorders, and future studies are needed.
The production and propagation of Tregs may be promising in alternative asthma and obesity treatments.
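The group comparisons reported above (e.g. control vs. obese Treg percentages) can be illustrated with a simple two-sample test. The data below are synthetic values mirroring the reported group means, not the study's measurements, and the test choice (Welch's t-test) is an illustrative assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic Treg percentages mirroring the reported means (0.38% vs. 0.16%),
# with an assumed within-group standard deviation of 0.08
control = rng.normal(0.38, 0.08, 40)
obese = rng.normal(0.16, 0.08, 40)

# Welch's t-test: does not assume equal variances between groups
t, p = stats.ttest_ind(control, obese, equal_var=False)
print(p < 0.001)  # True
```

With four groups, the authors' analysis would more likely involve ANOVA or nonparametric equivalents with post-hoc pairwise tests; the sketch shows only a single pairwise comparison.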

Keywords: asthma, flow cytometry, pediatric obesity, T cells

Procedia PDF Downloads 346
1624 System Identification of Building Structures with Continuous Modeling

Authors: Ruichong Zhang, Fadi Sawaged, Lotfi Gargab

Abstract:

This paper introduces a wave-based approach for system identification of high-rise building structures with a pair of seismic recordings, which can be used to evaluate structural integrity and detect damage in post-earthquake structural condition assessment. The fundamental idea of the approach is based on wave features of generalized impulse and frequency response functions (GIRF and GFRF), i.e., wave responses at one structural location to an impulsive motion at another reference location in the time and frequency domains, respectively. With a pair of seismic recordings at the two locations, GFRF is obtainable as the Fourier spectral ratio of the two recordings, and GIRF is then found by the inverse Fourier transformation of GFRF. With an appropriate continuous model for the structure, a closed-form solution for GFRF, and subsequently GIRF, can also be found in terms of wave transmission and reflection coefficients, which are related to the structural physical properties above the impulse location. Matching the two sets of GFRF and/or GIRF from the recordings and the model helps identify structural parameters such as wave velocity or shear modulus. For illustration, this study examines the ten-story Millikan Library in Pasadena, California with recordings of the Yorba Linda earthquake of September 3, 2002. The building is modelled as piecewise continuous layers, with which GFRF is derived as a function of such building parameters as impedance, cross-sectional area, and damping. GIRF can then be found in closed form for some special cases and numerically in general. Not only does this study reveal the influential building parameters in the wave features of GIRF and GFRF, it also shows some system-identification results, which are consistent with other vibration- and wave-based results. Finally, this paper discusses the effectiveness of the proposed model in system identification.
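The recording-side computation, GFRF as the Fourier spectral ratio of two recordings and GIRF as its inverse transform, can be sketched in a few lines of NumPy. The water-level floor `eps` used to stabilize the deconvolution is an illustrative assumption, not a detail from the paper:

```python
import numpy as np

def girf_from_recordings(roof_rec, base_rec, eps=1e-3):
    """GFRF as the Fourier spectral ratio of two recordings, GIRF as its
    inverse FFT; eps is a water-level floor to stabilize the deconvolution."""
    roof_spec = np.fft.rfft(roof_rec)
    base_spec = np.fft.rfft(base_rec)
    # avoid division by near-zero spectral values of the reference recording
    denom = np.where(np.abs(base_spec) < eps, eps, base_spec)
    gfrf = roof_spec / denom
    girf = np.fft.irfft(gfrf, n=len(roof_rec))
    return gfrf, girf

# sanity check: two identical recordings give GFRF = 1 at every frequency,
# so GIRF is an impulse at zero lag
rng = np.random.default_rng(0)
rec = rng.normal(size=1024)
gfrf, girf = girf_from_recordings(rec, rec)
print(round(girf[0], 6))  # 1.0
```

With real roof and basement recordings, the peak of GIRF away from zero lag gives the wave travel time between the two sensors, from which the wave velocity follows.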

Keywords: wave-based approach, seismic responses of buildings, wave propagation in structures, construction

Procedia PDF Downloads 233
1623 Revealing the Risks of Obstructive Sleep Apnea

Authors: Oyuntsetseg Sandag, Lkhagvadorj Khosbayar, Naidansuren Tsendeekhuu, Densenbal Dansran, Bandi Solongo

Abstract:

Introduction: Obstructive sleep apnea (OSA) is a common disorder affecting at least 2% to 4% of the adult population. It is estimated that nearly 80% of men and 93% of women with moderate to severe sleep apnea are undiagnosed. A number of screening questionnaires and clinical screening models have been developed to help identify patients with OSA, and these are needed in clinical practice. Purpose of study: To determine the association between OSA risk severity and risk factors. Material and Methods: A cross-sectional study included 114 patients presenting at the Central State Third Hospital and the Central State First Hospital. Patients who had obstructive sleep apnea (OSA) were selected for this study. The standard Stop-Bang questionnaire was obtained from all patients. According to their responses to the Stop-Bang questionnaire, patients were divided into low-risk, intermediate-risk, and high-risk groups. Descriptive statistics were presented as mean ± standard deviation (SD). Each questionnaire was compared on the likelihood ratios for positive and negative test results from regression. Statistical analyses were performed using SPSS 16. Results: 114 patients were enrolled (mean age 48 ± 16 years; 57 male) and divided into low risk 54 (47.4%), intermediate risk 33 (28.9%), and high risk 27 (23.7%). Across the risk groups, mean age (38 ± 13 vs. 54 ± 14 vs. 59 ± 10, p<0.05), blood pressure (115 ± 18 vs. 133 ± 19 vs. 142 ± 21, p<0.05), BMI (24, IQR 22-26 vs. 24, IQR 22-29 vs. 28, IQR 25-34, p<0.001), and neck circumference (35 ± 3.4 vs. 38 ± 4.7 vs. 41 ± 4.4, p<0.05) increased significantly. Multiple logistic regression showed that age is a significant independent factor for OSA (odds ratio 1.07, 95% CI 1.02-1.23, p<0.01). The predictive value of age for OSA was significantly high (AUC=0.833, 95% CI 0.758-0.909, p<0.001). Our study shows that the risk of OSA begins at 47 years of age (sensitivity 78.3%, specificity 74.1%).
Conclusions: Most patients were at intermediate or high risk. Age, blood pressure, neck circumference, and BMI increased as OSA risk increased. Age in particular is an independent factor with the highest significance for OSA: each additional year of age increases the likelihood of OSA approximately 1.1-fold.
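The reported analysis, a logistic regression of OSA risk on age yielding an odds ratio per year and an ROC-based evaluation, can be mimicked on synthetic data. The age-risk relationship below is assumed for illustration and does not reproduce the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 114
age = rng.normal(48, 16, n)  # ages matching the reported mean +/- SD
# assumed ground truth: OSA risk rises with age around a ~47-year threshold
prob = 1.0 / (1.0 + np.exp(-(age - 47.0) / 6.0))
y = rng.binomial(1, prob)

X = age.reshape(-1, 1)
clf = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
# exponentiating the regression coefficient gives the odds ratio per year
odds_ratio_per_year = float(np.exp(clf.coef_[0, 0]))
print(odds_ratio_per_year > 1.0, auc > 0.5)
```

The cutoff of 47 years with its sensitivity and specificity would then be read off the ROC curve, e.g. by maximizing Youden's index.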

Keywords: obstructive sleep apnea, Stop-Bang, BMI (Body Mass Index), blood pressure

Procedia PDF Downloads 310
1622 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare

Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl

Abstract:

Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
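A minimal sketch of the BBPP control limits described above, using SciPy's beta-binomial distribution. The uniform Beta(1, 1) prior, the sample numbers, and the central-interval (rather than HPD) limits are illustrative choices, not the paper's settings:

```python
from scipy.stats import betabinom

def bbpp_limits(a, b, x, n, m, coverage=0.99):
    """Central-interval control limits for the number of events among the
    next m cases, under the beta-binomial posterior predictive (BBPP)
    distribution. Prior Beta(a, b); observed x events in n cases."""
    # posterior Beta(a + x, b + n - x) compounded with Binomial(m, .)
    predictive = betabinom(m, a + x, b + n - x)
    alpha = (1.0 - coverage) / 2.0
    return predictive.ppf(alpha), predictive.ppf(1.0 - alpha)

# illustrative numbers: uniform prior, 5 events in 100 cases, chart next 50
lo, hi = bbpp_limits(a=1, b=1, x=5, n=100, m=50)
print(lo, hi)
```

Plotting each new batch's event count against these limits gives the control-chart behavior; the HPD variant favored by the study replaces the equal-tail quantiles with the shortest interval of the stated coverage.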

Keywords: average run length (ARL), bernoulli cusum (BC) chart, beta binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval

Procedia PDF Downloads 201
1621 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out by a high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects which are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation as a function of PEI chain size and protonation pattern.
The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles rather than on changes of DNA-strand curvature. The insights gained are expected to be of significant help in designing effective gene-delivery applications.
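The Boltzmann inversion step used in the second stage derives an initial coarse-grained potential from a target distribution via U(r) = -k_B T ln g(r), which is then iteratively refined. A minimal sketch on a made-up radial distribution function (the temperature factor and g(r) values are illustrative):

```python
import numpy as np

KB_T = 2.494  # k_B * T in kJ/mol at ~300 K (illustrative value)

def boltzmann_inversion(g_r):
    """Initial coarse-grained potential from a target distribution:
    U(r) = -k_B*T * ln g(r); diverges where g(r) -> 0."""
    g = np.asarray(g_r, dtype=float)
    with np.errstate(divide="ignore"):
        return -KB_T * np.log(g)

# made-up radial distribution function values at increasing separation
g = np.array([0.1, 1.8, 1.2, 1.0])
u = boltzmann_inversion(g)
print(u[3] == 0.0)  # True: g = 1 corresponds to zero potential
```

Regions with g(r) < 1 map to repulsive (positive) potential values and g(r) > 1 to attractive ones, which is the starting point that iterative Boltzmann inversion then corrects against the all-atom reference distributions.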

Keywords: DNA condensation, gene delivery, polyethyleneimine, molecular dynamics

Procedia PDF Downloads 120
1620 Digital Transformation in Education: Artificial Intelligence Awareness of Preschool Teachers

Authors: Cansu Bozer, Saadet İrem Turgut

Abstract:

Artificial intelligence (AI) has become one of the most important technologies of the digital age and is transforming many sectors, including education. The advantages offered by AI, such as automation, personalised learning, and data analytics, create new opportunities for both teachers and students in education systems. Preschool education plays a fundamental role in the cognitive, social, and emotional development of children. In this period, the foundations of children's creative thinking, problem-solving, and critical thinking skills are laid. Educational technologies, especially artificial intelligence-based applications, are thought to contribute to the development of these skills. For example, artificial intelligence-supported digital learning tools can support learning processes by offering activities that can be customised according to the individual needs of each child. However, the successful use of artificial intelligence-based applications in preschool education can be realised under the guidance of teachers who have the right knowledge about this technology. Therefore, it is of great importance to measure preschool teachers' awareness levels of artificial intelligence and to understand which variables affect this awareness. The aim of this study is to measure preschool teachers' awareness levels of artificial intelligence and to determine which factors are related to this awareness. In line with this purpose, teachers' level of knowledge about artificial intelligence, their thoughts about the role of artificial intelligence in education, and their attitudes towards artificial intelligence will be evaluated. The study will be conducted with 100 teachers working in Turkey using a descriptive survey model. In this context, ‘Artificial Intelligence Awareness Level Scale for Teachers’ developed by Ferikoğlu and Akgün (2022) will be used. The collected data will be analysed using SPSS (Statistical Package for the Social Sciences) software. 
Descriptive statistics (frequency, percentage, mean, standard deviation) and relationship analyses (correlation and regression analyses) will be used in data analysis. As a result of the study, the level of artificial intelligence awareness of preschool teachers will be determined, and the factors affecting this awareness will be identified. The findings obtained will contribute to the determination of studies that can be done to increase artificial intelligence awareness in preschool education.
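The planned descriptive and relational analyses (means, standard deviations, correlation, regression) can be sketched with SciPy on synthetic survey data. The experience-awareness relationship below is purely hypothetical, standing in for whichever predictors the survey actually measures:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 100  # sample size matching the planned number of teachers
years_experience = rng.uniform(1, 25, n)
# hypothetical awareness score rising mildly with experience (synthetic data)
awareness = 50 + 0.8 * years_experience + rng.normal(0, 8, n)

# descriptive statistics: mean and sample standard deviation
print(round(awareness.mean(), 1), round(awareness.std(ddof=1), 1))

# relationship analyses: Pearson correlation and simple linear regression
r, p = stats.pearsonr(years_experience, awareness)
slope, intercept, rvalue, pvalue, stderr = stats.linregress(years_experience, awareness)
print(r > 0, p < 0.05)
```

The actual study would run these in SPSS rather than Python, but the statistics computed are the same.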

Keywords: education, child development, artificial intelligence, preschool teachers

Procedia PDF Downloads 19
1619 Translanguaging and Cross-languages Analyses in Writing and Oral Production with Multilinguals: a Systematic Review

Authors: Maryvone Cunha de Morais, Lilian Cristine Hübner

Abstract:

Based on a translanguaging theoretical approach, which considers languages not as separate entities but as an entire repertoire available to bilingual individuals, this systematic review aimed at analyzing the methods (aims, samples investigated, types of stimuli, and analyses) adopted by studies on translanguaging practices associated with written and oral tasks (separately or integrated) in bilingual education. The PRISMA criteria for systematic reviews were adopted, with the descriptors "translanguaging", "bilingual education", and/or "written and oral tasks" used to search the Pubmed/Medline, Lilacs, Eric, Scopus, PsycINFO, and Web of Science databases for articles published between 2017 and 2021. 280 records were found, and after applying the inclusion/exclusion criteria, 24 articles were considered for this analysis. The results showed that translanguaging practices were investigated in four studies focused on written production analyses and ten focused on oral production analyses, whereas ten studies focused on both written and oral production analyses. The majority of the studies followed a qualitative approach, while five studies attempted to study translanguaging with quantitative statistical measures. Several types of methods were used to investigate translanguaging practices in written and oral production, with different approaches and tools, indicating that the methods are still in development. Moreover, the findings showed that students' interactions have received significant attention, and studies have been developed not just in language classes in bilingual education, but also in diverse educational and theoretical contexts such as Content and Language Integrated Learning, task repetition, science classes, collaborative writing, storytelling, peer feedback, Speech Act theory and collective thinking, language ideologies, conversational analysis, and discourse analyses.
The studies, whether focused on writing tasks, oral tasks, or both, have presented significant research and pedagogical implications, grounded in the view of integrated languages in bi- and multilinguals.

Keywords: bilingual education, oral production, translanguaging, written production

Procedia PDF Downloads 126
1618 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
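The ensemble strategy described, averaging logistic regression, random forest, and neural network predictions, can be sketched with scikit-learn's soft-voting classifier. The dataset and hyperparameters below are illustrative stand-ins, not those of the study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# synthetic stand-in for the air quality dataset (features and labels assumed)
X, y = make_classification(n_samples=600, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("nn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
    ],
    voting="soft",  # average the three models' predicted probabilities
)
ensemble.fit(Xtr, ytr)
print(round(ensemble.score(Xte, yte), 2))
```

Random forests also expose the permutation-based "mean decrease in accuracy" importance mentioned in the abstract (e.g. via `sklearn.inspection.permutation_importance`), which is how the top predictor variables would be ranked.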

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 127
1617 A Cooperative Signaling Scheme for Global Navigation Satellite Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for location services, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Spatial diversity is one such scheme, in that it improves network throughput; however, it requires multiple antennas, which could significantly increase the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, in which virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest, with the neighboring satellites modeled as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate asynchronously, and thus the overall performance of the GNSS network could degrade severely. To tackle this problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement because they require signal decoding at the relay nodes. Although the implementation at the relay nodes could be simplified to some degree by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to move the relay-node operations to the source node, which has more resources than the relay nodes. In this paper, we therefore propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time-reversal, and conjugation at the relay nodes.
The numerical results confirm that the proposed scheme achieves the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development.
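The abstract does not spell out the authors' combining rule, but the diversity benefit that cooperative signaling targets can be illustrated with the classic destination-side alternative, maximal-ratio combining of the relayed copies. All channel gains below are hypothetical:

```python
# Illustrative only (not the authors' source-node combining rule): a symbol
# reaches the destination over several relay paths, and the destination
# coherently combines the copies (maximal-ratio combining, MRC), which is
# the diversity gain that cooperative signaling schemes aim to realize.

def mrc_combine(received, channels):
    """Weight each received copy by the conjugate of its channel gain and
    normalize, recovering an estimate of the transmitted symbol."""
    num = sum(h.conjugate() * r for h, r in zip(channels, received))
    den = sum(abs(h) ** 2 for h in channels)
    return num / den

symbol = 1 + 0j                            # transmitted BPSK symbol
h = [0.8 + 0.2j, 0.5 - 0.4j, 1.1 + 0.1j]   # hypothetical relay-path gains
rx = [hi * symbol for hi in h]             # noiseless received copies
est = mrc_combine(rx, h)
print(round(est.real, 6))  # 1.0: symbol recovered exactly in the noiseless case
```

With noise added, the combined estimate degrades more gracefully than any single path, which is the diversity effect the numerical BER results in the abstract reflect.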

Keywords: global navigation satellite network, cooperative signaling, data combining, nodes

Procedia PDF Downloads 280
1616 Effectiveness of Technology Enhanced Learning in Orthodontic Teaching

Authors: Mohammed Shaath

Abstract:

Aims: Technological advancements in teaching and learning have made significant improvements over the past decade and have been incorporated into institutions to aid the learner's experience. This review aims to assess whether Technology Enhanced Learning (TEL) pedagogy is more effective than traditional methods at improving students' attitudes and knowledge retention in orthodontic training. Methodology: The searches comprised Systematic Reviews (SRs) comparing TEL and traditional teaching methods in the following databases: PubMed, SCOPUS, Medline, and Embase. One researcher performed the screening, data extraction, and analysis and assessed risk of bias and quality using A Measurement Tool to Assess Systematic Reviews 2 (AMSTAR-2). Kirkpatrick's 4-level evaluation model was used to evaluate educational value. Results: A total of 34 SRs were identified after removal of duplicates and irrelevant SRs; 4 fit the inclusion criteria. On Level 1, students responded positively to TEL methods, although the harder a platform was to use, the less favourably it was received. Nonetheless, students still showed high levels of acceptability. Level 2 showed no significant overall advantage in knowledge gain for TEL methods, although one SR found that certain areas of study within orthodontics showed a statistical improvement with TEL. Level 3 was the least reported on; results suggested that, without time restrictions, TEL methods may be advantageous. Level 4 showed that both methods are equally effective, but TEL has the potential to overtake traditional methods in the future as a form of active, student-centered learning. Conclusion: TEL has a high level of acceptability and the potential to improve learning in orthodontics. Current reviews could be improved, but the biggest issue to address is the primary studies, which show lower levels of evidence and heterogeneity in their results.
As it stands, the replacement of traditional methods with TEL cannot be fully supported in an evidence-based manner. The potential of TEL has been recognized, and there is already some evidence that it can be more effective in some aspects of learning, catering to a more technology-savvy generation.

Keywords: TEL, orthodontic, teaching, traditional

Procedia PDF Downloads 42
1615 Use of Information and Communication Technologies in Enhancing Health Care Delivery for Human Immunodeficiency Virus Patients in Bamenda Health District

Authors: Abanda Wilfred Chick

Abstract:

Background: According to the World Health Organization (WHO), the use of Information and Communication Technologies (ICT) in the health sectors of developing nations has been shown to yield up to a fifty percent reduction in mortality and a twenty-five to fifty percent increase in productivity. The objective of this study was to assess the use of information and communication technologies in enhancing health care delivery for Human Immunodeficiency Virus (HIV) patients in the Bamenda Health District. Methods: This was a descriptive-analytical cross-sectional study in which 388 participants were consecutively selected from among health personnel and HIV patients at public and private health institutions involved in HIV management. Data on socio-demographic variables, the use of ICT tools, and associated challenges were collected using structured questionnaires. Descriptive statistics with ninety-five percent confidence intervals were used to summarize findings, while Cramer's V test, logistic regression, and the Chi-square test were used to measure associations between variables. Epi Info version 7.2, MS Excel, and SPSS version 25.0 were used for data entry and statistical analysis, respectively. Results: Of the participants, one-quarter were health personnel and three-quarters were HIV patients. For both groups, there was a significant relationship between the use of ICT and demographic characteristics such as level of education, marital status, and age (p < 0.05). Among the impediments to using ICT tools, the greatest proportion of patients identified the high cost of airtime or internet bundles, followed by inadequate training on ICT tools; for health personnel, the majority cited inadequate training on ICT tools/applications, and half cited unavailability of electricity.
Conclusion: Fewer than half of the HIV patients effectively make use of ICT tools/applications to receive health care. Of the health personnel, three-quarters use ICTs, but only one-quarter effectively use mobile phones and one-third computers, respectively, to render care to HIV patients.
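The Chi-square test of association used in the study can be sketched with a small worked example. The contingency table below is hypothetical, not the study's data:

```python
# Hedged sketch with made-up counts: the chi-square test of association
# between ICT use and a demographic variable (here a hypothetical 2x2
# table of education level vs. ICT use).

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total  # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# hypothetical counts: [uses ICT, does not use ICT] by education level
table = [[30, 70],   # lower education
         [60, 40]]   # higher education
stat = chi_square(table)
print(round(stat, 2), stat > 3.841)  # 18.18 True (3.841 = critical value, df=1, p=0.05)
```

A statistic above the critical value for the table's degrees of freedom corresponds to p < 0.05, matching the significance criterion the abstract reports.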

Keywords: ICT tools, HIV patients, health personnel, health care delivery

Procedia PDF Downloads 84
1614 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator

Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty

Abstract:

Verification and validation of simulated process models is the most important phase of the simulator life cycle. Evaluation of simulated process models based on verification and validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady state and transient conditions. The process of verification and validation helps qualify the process simulator for its intended purpose, whether that is providing comprehensive training or design verification. In general, model verification is carried out by comparing simulated component characteristics with the original requirements to ensure that each step in the model development process completely incorporates all design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or in integrated mode. A full-scope replica operator training simulator named KALBR-SIM (Kalpakkam Breeder Reactor Simulator) has been developed for the Prototype Fast Breeder Reactor (PFBR) at IGCAR, Kalpakkam, India, with the main participants being engineers/experts from the modeling team and the process design and instrumentation and control design teams. This paper discusses the verification and validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, and the reference documents and standards used. It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on the experts' comments, final qualification of the simulator for its intended purpose, and the difficulties faced while coordinating the various activities.
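Validation by comparing simulated process parameters against plant data typically reduces to a deviation metric checked against an acceptance band. The sketch below is illustrative; the data and the 2.0 °C tolerance are assumptions, not values from the paper:

```python
# Illustrative sketch (hypothetical data and tolerance): compare a simulated
# transient against the reference plant response and check the root-mean-
# square deviation against an acceptance criterion.

def rms_deviation(simulated, reference):
    """RMS deviation between two equal-length parameter traces."""
    n = len(reference)
    return (sum((s - r) ** 2 for s, r in zip(simulated, reference)) / n) ** 0.5

plant = [500.0, 512.0, 520.0, 518.0, 515.0]   # e.g. coolant temperature, degC
model = [500.5, 511.0, 521.2, 517.5, 515.4]   # simulated values at same times
dev = rms_deviation(model, plant)
print(round(dev, 3), dev < 2.0)  # 0.787 True (2.0 degC is an assumed acceptance band)
```

Real simulator qualification uses many such parameter-by-parameter checks over steady-state and transient scenarios, with tolerances set by the design and training requirements.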

Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state

Procedia PDF Downloads 266
1613 The Comparison of Dismount Skill between National and International Men’s Artistic Gymnastics in Parallel Bars Apparatus

Authors: Chen ChihYu, Tang Wen Tzu, Chen Kuang Hui

Abstract:

Aim: To compare the dismount skills of Taiwanese and elite international gymnasts on parallel bars under the 2017-2020 code of points. Methods: The gymnasts who advanced to the parallel bars event finals of four competitions, the World Championships, the Universiade, the National Games of Taiwan, and the National Intercollegiate Athletic Games of Taiwan, in both 2017 and 2019 were selected for this study. The dismount skill on parallel bars was analyzed, and the average difficulty scores were compared by one-way ANOVA. Descriptive statistics were applied to present the type and difficulty of each gymnast's dismount in these four competitions. The data from the World Championships and the Universiade were combined as the international group (INT), and the data from the Taiwanese National Games and the National Intercollegiate Athletic Games were combined as the national group (NAT). The differences between INT and NAT were analyzed by the Chi-square test. Statistical significance was set at α = 0.05. Results: i) There was a significant difference in the mean parallel bars dismount difficulty across the four competitions as analyzed by one-way ANOVA. The dismount scores at both the World Championships and the Universiade were significantly higher than at the Taiwanese National Games and the National Intercollegiate Athletic Games (0.58±0.08 and 0.56±0.08 vs. 0.42±0.06 and 0.40±0.06, p < 0.05). ii) Most of the gymnasts at the World Championships and the Universiade selected a 0.6-point skill as the parallel bars dismount element, whereas at the Taiwanese National Games and the National Intercollegiate Athletic Games, most gymnasts performed a 0.4-point dismount skill. iii) The Chi-square test showed a significant difference in the selection of parallel bars dismount skills: the INT group used E or E+ difficulty elements as the dismount skill, while the NAT group selected D or D- difficulty elements.
Conclusion: The level of parallel bars dismounts among Taiwanese gymnasts is inferior to that of elite international gymnasts. It is suggested that Taiwanese gymnasts practice an F-difficulty dismount (double salto forward tucked with half twist) in the future.
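The one-way ANOVA used to compare mean difficulty scores can be sketched with hypothetical values (the scores below are illustrative, not the study's data):

```python
# Minimal one-way ANOVA sketch with hypothetical dismount difficulty values,
# mirroring the comparison of mean scores across competition groups.

def anova_f(groups):
    """F statistic for a one-way ANOVA: between-group mean square over
    within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

intl = [0.6, 0.5, 0.6, 0.6]   # hypothetical international-group scores
natl = [0.4, 0.4, 0.5, 0.4]   # hypothetical national-group scores
f = anova_f([intl, natl])
print(round(f, 2))  # 18.0
```

A large F (relative to the critical value for the design's degrees of freedom) corresponds to a significant difference in group means, as the abstract reports.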

Keywords: Artistic Gymnastics World Championships, dismount, difficulty score, element

Procedia PDF Downloads 142
1612 Lateral Torsional Buckling: Tests on Glued Laminated Timber Beams

Authors: Vera Wilden, Benno Hoffmeister, Markus Feldmann

Abstract:

Glued laminated timber (glulam) is a preferred choice for long-span girders, e.g., for gyms or storage halls. While the material provides sufficient strength to resist the bending moments, large spans lead to increased slenderness of such members and to a higher susceptibility to stability issues, in particular to lateral torsional buckling (LTB). Rules for the determination of the ultimate LTB resistance are provided by Eurocode 5. The verification of the resistance may be performed using the so-called equivalent member method or by means of second-order theory calculations (direct method) considering equivalent imperfections. Both methods have significant limitations concerning their applicability: the equivalent member method is limited to rather simple cases, while the direct method lacks detailed provisions regarding imperfections and requirements for numerical modeling. In this paper, the results of a test series on slender glulam beams in three- and four-point bending are presented. The tests were performed in an innovative, newly developed testing rig, allowing for a very precise definition of loading and boundary conditions. The load was introduced by a hydraulic jack that follows the lateral deformation of the beam by means of a servo-controller coupled with the tested member, keeping the load direction vertical. The deformation-controlled tests allowed for the identification of the ultimate limit state (governed by elastic stability) and the corresponding deformations. Prior to the tests, the structural and geometrical imperfections were determined and used later in the numerical models. After the stability tests, the nearly undamaged members were tested again in pure bending until the ultimate moment resistance of the cross-section was reached. These results, accompanied by numerical studies, were compared to resistance values obtained using both methods according to Eurocode 5.
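The equivalent member method mentioned above reduces the LTB check to a slenderness-dependent reduction factor. A sketch of the Eurocode 5 factor k_crit as a function of the relative slenderness for bending (EN 1995-1-1, clause 6.3.3), with illustrative slenderness values:

```python
# Sketch of the Eurocode 5 equivalent member method: the lateral torsional
# buckling factor k_crit as a piecewise function of the relative slenderness
# for bending, lambda_rel_m = sqrt(f_mk / sigma_m_crit).

def k_crit(lam_rel_m):
    """Reduction factor applied to the bending strength to account for
    lateral torsional buckling (EN 1995-1-1)."""
    if lam_rel_m <= 0.75:
        return 1.0                      # no reduction: stocky member
    if lam_rel_m <= 1.4:
        return 1.56 - 0.75 * lam_rel_m  # transition range
    return 1.0 / lam_rel_m ** 2         # elastic buckling governs

# illustrative relative slenderness values
for lam in (0.5, 1.0, 2.0):
    print(lam, round(k_crit(lam), 2))  # 1.0, 0.81, 0.25
```

The tested slender beams fall toward the high-slenderness branch, where the resistance is governed by elastic stability, consistent with the ultimate limit state observed in the tests.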

Keywords: experimental tests, glued laminated timber, lateral torsional buckling, numerical simulation

Procedia PDF Downloads 238
1611 Development of Interaction Diagram for Eccentrically Loaded Reinforced Concrete Sandwich Walls with Different Design Parameters

Authors: May Haggag, Ezzat Fahmy, Mohamed Abdel-Mooty, Sherif Safar

Abstract:

Sandwich sections have a very complex nature due to the variability of behavior of the different materials within the section. The cracking, crushing, and yielding capacities of the constituent materials add considerable complexity, and slippage between the different layers adds further to the section's complex behavior. Conventional methods implemented in current industry guidelines do not account for the above complexities, so a thorough study is needed to understand the true behavior of sandwich panels and thus increase the ability to use them effectively and efficiently. The purpose of this paper is to conduct a numerical investigation using ANSYS software of the structural behavior of a sandwich wall section under eccentric loading. The sandwich walls studied herein are composed of two RC faces, a foam core, and linking shear connectors. The faces are modeled using solid elements, while the reinforcement and connectors are modeled using link elements. The analysis conducted herein is a nonlinear static analysis incorporating material nonlinearity, cracking and crushing of concrete, and yielding of steel. The model is validated by comparing it to test results in the literature. After validation, the model is used in an extensive parametric analysis to investigate the effect of three key parameters on the axial force-bending moment interaction diagram of the walls: the concrete compressive strength, the face thickness, and the number of shear connectors. Furthermore, the results of the parametric study are used to predict a coefficient α that links the interaction diagram of a solid wall to that of a sandwich wall. The equation is derived from the parametric study data using regression analysis. The predicted α was used to construct the interaction diagram of the investigated wall, and the results were compared with the ANSYS results and showed good agreement.
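The regression step, fitting a coefficient linking the solid-wall and sandwich-wall diagrams from the parametric-study results, can be sketched with an ordinary least-squares line. The (face thickness, alpha) pairs below are hypothetical, not the paper's results:

```python
# Hedged sketch (made-up data): ordinary least-squares fit of a linking
# coefficient alpha against one study parameter (here, face thickness),
# standing in for the paper's regression over the parametric-study data.

def ols(x, y):
    """Slope and intercept of the least-squares line y = m*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

face_thickness = [40.0, 50.0, 60.0, 70.0]   # mm, hypothetical
alpha = [0.62, 0.68, 0.74, 0.80]            # hypothetical coefficients
m, b = ols(face_thickness, alpha)
print(round(m, 4), round(b, 2))  # 0.006 0.38
```

The paper's actual equation is multivariate (compressive strength, face thickness, connector count); the single-variable fit here only illustrates the regression mechanics.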

Keywords: sandwich walls, interaction diagrams, numerical modeling, eccentricity, reinforced concrete

Procedia PDF Downloads 403
1610 Geomorphometric Analysis of the Hydrologic and Topographic Parameters of the Katsina-Ala Drainage Basin, Benue State, Nigeria

Authors: Oyatayo Kehinde Taofik, Ndabula Christopher

Abstract:

Drainage basins are a central theme in the green economy. Rising challenges in flooding, erosion, sediment transport, and sedimentation threaten the green economy. This has led to increasing emphasis on quantitative analysis of drainage basin parameters for better understanding, estimation, and prediction of fluvial responses and thus of the associated hazards or disasters. This can be achieved through direct measurement, characterization, parameterization, or modeling. This study applied a Remote Sensing and Geographic Information System approach to the parameterization and characterization of the morphometric variables of the Katsina-Ala basin using a 30 m resolution Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM). This was complemented with topographic and hydrological maps of Katsina-Ala at a scale of 1:50,000. Linear, areal, and relief parameters were characterized. The results show that the Ala and Udene sub-watersheds are 4th and 5th order basins, respectively. The stream network shows a dendritic pattern, indicating homogeneity in texture and a lack of structural control in the study area. The Ala and Udene sub-watersheds have values for elongation ratio, circularity ratio, form factor, and relief ratio of 0.48 / 0.39 / 0.35 / 9.97 and 0.40 / 0.35 / 0.32 / 6.0, respectively, and values for drainage texture and ruggedness index of 0.86 / 0.011 and 1.57 / 0.016. The study concludes that the two sub-watersheds are elongated, suggesting that they are susceptible to erosion and thus to higher sediment loads in the river channels, which will dispose the watersheds to higher flood peaks. The study also concludes that the sub-watersheds have a very coarse texture, with good permeability of subsurface materials and infiltration capacity, which significantly recharges the groundwater.
The study recommends that the Local and State Governments put efforts in place to reduce the extent of paved surfaces in these sub-watersheds by implementing a robust agroforestry program at the grassroots level.
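The morphometric indices reported above follow standard definitions (Schumm's elongation ratio, Miller's circularity ratio, Horton's form factor). A sketch with hypothetical basin measurements:

```python
import math

# Sketch of the standard morphometric indices used in the study, computed
# from hypothetical basin measurements: area A (km^2), basin length Lb (km),
# and perimeter P (km). Values below are illustrative, not the study's.

def elongation_ratio(A, Lb):
    """Schumm (1956): diameter of a circle of equal area divided by Lb."""
    return (2.0 / Lb) * math.sqrt(A / math.pi)

def circularity_ratio(A, P):
    """Miller (1953): basin area over the area of a circle with the same perimeter."""
    return 4.0 * math.pi * A / P ** 2

def form_factor(A, Lb):
    """Horton (1932): basin area over the square of basin length."""
    return A / Lb ** 2

A, Lb, P = 250.0, 30.0, 95.0   # hypothetical measurements
print(round(elongation_ratio(A, Lb), 2),
      round(circularity_ratio(A, P), 2),
      round(form_factor(A, Lb), 2))  # 0.59 0.35 0.28
```

Elongation ratios well below about 0.7, as in both the hypothetical example and the study's reported 0.48 and 0.40, indicate elongated basins, which is the basis for the study's conclusion.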

Keywords: erosion, flood, mitigation, morphometry, watershed

Procedia PDF Downloads 87
1609 Determining Factors for Opening Accounts, Customers’ Perception and Their Satisfaction Level Towards the First Security Islamic Bank of Bangladesh

Authors: Md. Akiz Uddin

Abstract:

This research attempted to identify the determining factors that persuade customers of the First Security Islamic Bank Limited (FSIBL) to open accounts, along with their perception of and satisfaction level towards the bank. Initially, a theoretical model was established based on a review of the existing literature. After that, a self-administered structured questionnaire was developed, and data were collected from 180 customers of FSIBL of Bangladesh using a purposive sampling technique. The collected data were then analyzed with statistical software. Structural Equation Modelling (SEM) was used to verify the model of the study and test the hypotheses. The study particularly examined the determinants of opening accounts and customers' perception and satisfaction level towards the bank with respect to several factors, such as the bank's compliance with Shariah law, use of modern technology, assurance, reliability, empathy, profitability, and responsiveness. To examine the impact of religious belief on being FSIBL clients, the study also investigated non-Muslim clients' perception of FSIBL. The study focused on FSIBL customers from five branches in Dhaka city only. The study found that religious belief is the most significant factor for Muslim customers in considering FSIBL for opening an account, and that they are satisfied with the services, too. However, for non-Muslim customers, other benefits, such as e-banking and various user-friendly services, are the most significant factors for choosing FSIBL. Their satisfaction level is also statistically significant. Furthermore, even though the non-Muslim customers did not consider religious belief a determining factor in choosing FSIBL, these respondents reported trusting that people who believe in Shariah law are more reliable custodians of their money.
These findings open an avenue for future researchers to conduct further studies in this area by employing a larger sample size and more branches and by extending the current model with new variables. The study is an important addition to the literature on Islamic banking, service quality, and customer satisfaction, particularly regarding the success of the Islamic banking system in Bangladesh.

Keywords: Islamic banking, customers’ satisfaction, customers’ perception, Shariah law

Procedia PDF Downloads 76
1608 Clinical Comparative Study Comparing Efficacy of Intrathecal Fentanyl and Magnesium as an Adjuvant to Hyperbaric Bupivacaine in Mild Pre-Eclamptic Patients Undergoing Caesarean Section

Authors: Sanchita B. Sarma, M. P. Nath

Abstract:

Adequate analgesia following caesarean section decreases morbidity, hastens ambulation, improves patient outcomes, and facilitates care of the newborn. Intrathecal magnesium, an NMDA antagonist, has been shown to prolong analgesia without significant side effects in healthy parturients. The aim of this study was to evaluate the onset and duration of sensory and motor block, hemodynamic effects, postoperative analgesia, and adverse effects of magnesium or fentanyl given intrathecally with hyperbaric 0.5% bupivacaine in patients with mild preeclampsia undergoing caesarean section. Sixty women with mild preeclampsia undergoing elective caesarean section were included in a prospective, double-blind, controlled trial. Patients were randomly assigned to receive spinal anesthesia with 2 mL of 0.5% hyperbaric bupivacaine with either 12.5 µg fentanyl (group F) or 0.1 mL of 50% magnesium sulphate (50 mg) (group M) with 0.15 mL of preservative-free distilled water. Onset, duration, and recovery of sensory and motor block, time to maximum sensory block, duration of spinal anaesthesia, and postoperative analgesic requirements were studied. Statistical comparisons were carried out using the Chi-square or Fisher's exact test and the independent Student's t-test where appropriate. The onset of both sensory and motor block was slower in the magnesium group. The durations of spinal anaesthesia (246 vs. 284) and motor block (186.3 vs. 210) were significantly longer in the magnesium group. The total analgesic top-up requirement was lower in group M. Hemodynamic parameters were similar in both groups. Intrathecal magnesium caused minimal side effects. Since fentanyl and other opioid congeners are not easily available throughout the country, magnesium, with its easy availability and milder side-effect profile, can be a cost-effective alternative to fentanyl, given intrathecally along with bupivacaine, in managing pregnancy-induced hypertension (PIH) patients undergoing caesarean section.
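The independent Student's t-test used for the group comparisons can be sketched with hypothetical durations (the values below are illustrative, not the trial's raw data):

```python
# Minimal sketch (hypothetical values) of the independent two-sample
# Student's t-test used to compare block durations between groups.

def t_stat(a, b):
    """Two-sample t statistic with pooled variance (equal variances assumed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance, group b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5

fentanyl = [240, 250, 248, 246]    # hypothetical durations, minutes
magnesium = [280, 286, 284, 286]   # hypothetical durations, minutes
print(round(t_stat(magnesium, fentanyl), 2))  # 14.72
```

A t statistic this large for the corresponding degrees of freedom gives p well below 0.05, i.e., a significantly longer duration in the magnesium group, matching the direction of the reported result.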

Keywords: analgesia, magnesium, pre eclampsia, spinal anaesthesia

Procedia PDF Downloads 321
1607 University-home Partnerships for Enhancing Students’ Career Adapting Responses: A Moderated-mediation Model

Authors: Yin Ma, Xun Wang, Kelsey Austin

Abstract:

Purpose – Building upon career construction theory and conservation of resources theory, we developed a moderated mediation model to examine how perceived university support impacts students’ career adapting responses, namely crystallization, exploration, decision, and preparation, via the mediator career adaptability and the moderator perceived parental support. Design/methodology/approach – A multi-stage sampling strategy was employed and survey data were collected. Structural equation modeling was used to perform the analysis. Findings – Perceived university support could directly promote students’ career adaptability and promote three career adapting responses, namely exploration, decision, and preparation. It could also impact all four career adapting responses via the mediating effect of career adaptability. Its impact on students’ career adaptability increases greatly when students receive parental career-related support. Research limitations/implications – The cross-sectional design limits causal inference. As the study was conducted in China, our findings should be interpreted cautiously in other countries due to cultural differences. Practical implications – University support is vital to students’ career adaptability, and support from parents can enhance this process. University-home collaboration is necessary to promote students’ career adapting responses. For students, seeking and utilizing as many supporting resources as possible is vital for their human resource development. At an organizational level, universities could benefit from our findings by introducing practices that ask students to rate career-related courses and encourage them to talk with their parents regularly. Originality/value – Using recently developed scales, the current work contributes to the literature by investigating the impact of multiple contextual factors on students’ career adapting responses.
It also provides empirical support for the role of human intervention in fostering career adapting responses.

Keywords: career adaptability, university and parental support, China studies, sociology of education

Procedia PDF Downloads 65
1606 Assessing Future Offshore Wind Farms in the Gulf of Roses: Insights from Weather Research and Forecasting Model Version 4.2

Authors: Kurias George, Ildefonso Cuesta Romeo, Clara Salueña Pérez, Jordi Sole Olle

Abstract:

With the growing prevalence of wind energy, there is a need for modeling techniques to evaluate the impact of wind farms on meteorology and oceanography. This study presents an approach that utilizes the Weather Research and Forecasting (WRF) model with a Wind Farm Parametrization to simulate the dynamics around the Parc Tramuntana project, an offshore wind farm to be located near the Gulf of Roses, off the coast of Barcelona, Catalonia. The model incorporates parameterizations for wind turbines, enabling a representation of the wind field and how it interacts with the infrastructure of the wind farm. Current results demonstrate that the model effectively captures variations in temperature, pressure, and wind speed and direction over time, along with their resulting effects on the power output of the wind farm. These findings are crucial for optimizing turbine placement and operation, thus improving the efficiency and sustainability of the wind farm. In addition to atmospheric interactions, this study examines the wake effects among the turbines in the farm. A range of meteorological parameters was also considered to offer a comprehensive understanding of the farm's microclimate. The model was tested under different horizontal resolutions and farm layouts to scrutinize the wind farm's effects more closely. These experimental configurations allow for a nuanced understanding of how turbine wakes interact with each other and with the broader atmospheric and oceanic conditions. This approach serves as a potent tool for stakeholders in renewable energy, environmental protection, and marine spatial planning, providing a range of information regarding the environmental and socio-economic impacts of offshore wind energy projects.
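The WRF Wind Farm Parametrization itself represents turbines as momentum sinks and turbulence sources within the mesoscale model, but the wake-deficit idea it captures can be illustrated with the classic Jensen/Park single-wake model. All numbers below are illustrative:

```python
import math

# Not the WRF Wind Farm Parametrization itself, but a classic engineering
# single-wake sketch (Jensen/Park model) conveying the turbine wake-deficit
# effect the study examines; parameter values are illustrative.

def jensen_deficit(Ct, D, x, k=0.04):
    """Fractional velocity deficit at downstream distance x behind a turbine
    of rotor diameter D and thrust coefficient Ct; k is the wake-decay
    constant (roughly 0.04-0.05 offshore)."""
    return (1.0 - math.sqrt(1.0 - Ct)) * (D / (D + 2.0 * k * x)) ** 2

u0 = 10.0            # free-stream wind speed, m/s
D, Ct = 150.0, 0.8   # offshore-scale rotor diameter and a typical Ct
x = 7 * D            # a common downstream turbine spacing
u_wake = u0 * (1.0 - jensen_deficit(Ct, D, x))
print(round(u_wake, 2))  # 7.73
```

Even at seven rotor diameters downstream, the waked turbine sees a noticeably reduced wind speed, which is why wake interactions matter for layout optimization and farm power output.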

Keywords: weather research and forecasting, wind turbine wake effects, environmental impact, wind farm parametrization, sustainability analysis

Procedia PDF Downloads 72
1605 Absolute Lymphocyte Count as Predictor of Pneumocystis Pneumonia in Patients With Unknown HIV Status at a Private Tertiary Hospital

Authors: Marja A. Bernardo, Coreena A. Bueser, Cybele Lara R. Abad, Raul V. Destura

Abstract:

Pneumocystis jirovecii pneumonia (PCP) is the most common opportunistic infection among people with HIV. Early consideration of PCP should be given even in patients whose HIV status is unknown, as delay in treatment may be fatal. The use of the absolute lymphocyte count (ALC) has been suggested as an alternative predictor of PCP, especially in resource-limited settings where PCR testing is costly or delayed. Objective: To determine whether the absolute lymphocyte count (ALC) can be used as a screening tool to predict Pneumocystis pneumonia in patients with unknown HIV status admitted to a private tertiary hospital. Methods: A retrospective cross-sectional study was conducted at a private tertiary medical center. Inpatient medical records of patients aged 18 years and above from January 2012 to May 2014 in whom a clinical diagnosis of Pneumocystis jirovecii pneumonia was made were reviewed for inclusion. Demographic data, clinical features, hospital course, and PCP PCR and HIV results were recorded. Independent t-tests and chi-square analyses were used to determine statistical differences between the PCP-positive and PCP-negative groups. The Mann-Whitney U-test was used for comparison of hospital stay. Results: There were no statistically significant differences in baseline characteristics between the PCP-positive and PCP-negative groups. While both the percent lymphocyte count (0.14 ± 0.13 vs. 0.21 ± 0.16) and the ALC (1160 ± 528.67 vs. 1493.70 ± 988.61) were lower in the PCP-positive group, only the percent lymphocyte count reached a statistically significant difference (p = 0.067 vs. p = 0.042). Conclusion: A quick determination of the ALC may be useful as an additional parameter to help screen for and diagnose Pneumocystis pneumonia. In our study, the ALC of patients with PCP appears to be lower than that of patients without PCP. A low ALC (e.g., below 1200) may help with the decision regarding empiric treatment.
However, it should be used in conjunction with the patient’s clinical presentation, as well as other diagnostic tests. Larger, prospective studies incorporating the ALC with other clinical predictors are necessary to optimally predict those who would benefit from empiric or expedited management for potential PCP.
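The proposed screening heuristic reduces to a simple threshold rule. The cutoff below is taken from the abstract's own example (ALC below 1200), and, as the authors stress, the flag is only an adjunct to clinical assessment:

```python
# Sketch of the screening heuristic suggested above: flag patients whose
# absolute lymphocyte count falls below the abstract's example cutoff
# (1200 cells/uL) as higher-risk for PCP, pending clinical correlation.

PCP_ALC_CUTOFF = 1200  # cells/uL; example threshold quoted in the abstract

def flag_possible_pcp(alc):
    """True if ALC alone suggests considering empiric PCP work-up."""
    return alc < PCP_ALC_CUTOFF

# the two middle values echo the reported group means (1160 vs. ~1494)
for alc in (900, 1160, 1494):
    print(alc, flag_possible_pcp(alc))  # True, True, False
```

Note that the reported group means overlap substantially (large standard deviations), so any fixed cutoff will misclassify some patients; this is why the authors call for larger prospective studies before relying on the rule.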

Keywords: Pneumocystis carinii pneumonia, Absolute Lymphocyte Count, infection, PCP

Procedia PDF Downloads 349
1604 Treatment of Non-Small Cell Lung Cancer (NSCLC) With Activating Mutations Considering ctDNA Fluctuations

Authors: Moiseenko F. V., Volkov N. M., Zhabina A. S., Stepanova E. O., Kirillov A. V., Myslik A. V., Artemieva E. V., Agranov I. R., Oganesyan A. P., Egorenkov V. V., Abduloeva N. H., Aleksakhina S. Yu., Ivantsov A. O., Kuligina E. S., Imyanitov E. N., Moiseyenko V. M.

Abstract:

Analysis of ctDNA in patients with NSCLC is an emerging biomarker. Multiple studies performing quantitative, or at least qualitative, analysis before and during the first period of treatment with TKIs have shown the prognostic value of ctDNA clearance. Still, these important results are not incorporated into clinical standards. We evaluated the role of ctDNA in EGFR-mutated NSCLC receiving first-line TKI therapy. First, we analyzed sequential plasma samples from 30 patients, collected before intake of the first tablet (at baseline) and at 6, 12, 24, 36, and 48 hours after this starting point. The EGFR-M+ allele was measured by ddPCR. Afterward, we added sequential qualitative analysis of ctDNA with the cobas® EGFR Mutation Test v2 in 99 NSCLC patients before the first dose, after 2 and 4 months of treatment, and on progression. Early response analysis showed a decline of the EGFR-M+ level in plasma within the first 48 hours of treatment in 11 subjects. All these patients showed an objective tumor response. Ten patients showed either elevation of the EGFR-M+ plasma concentration (n = 5) or a stable level of circulating EGFR-M+ after the start of therapy (n = 5); only 3 of these patients achieved an objective response (p = 0.026 compared to the former group). A rapid decline of the plasma EGFR-M+ DNA concentration also predicted longer PFS (13.7 vs. 11.4 months, p = 0.030). Long-term ctDNA monitoring showed clinically significant heterogeneity of EGFR-mutated NSCLC treated with first-line TKIs in terms of progression-free and overall survival. Patients without detectable ctDNA at baseline (N = 32) had the best prognosis on treatment (PFS: 24.07 [16.8-31.3] and OS: 56.2 [21.8-90.7] months). Those who achieved clearance after two months of TKI therapy (N = 42) had comparably good PFS (19.0 [13.7-24.2] months). Individuals who retained ctDNA after 2 months (N = 25) had the worst prognosis (PFS: 10.3 [7.0-13.5] months, p < 0.001). Nine of these 25 patients still had not achieved ctDNA clearance at 4 months, with no statistically significant difference in PFS from the group without clearance at 2 months. The prognostic heterogeneity of EGFR-mutated NSCLC should be taken into consideration when planning further clinical trials and optimizing patient outcomes.
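The three-group stratification described above can be sketched as a small decision rule. The group boundaries and median PFS values are taken from the abstract; the function and field names are our own illustration, not the authors' code.

```python
def prognostic_group(baseline_ctdna_detectable, cleared_at_2_months):
    """Stratify an EGFR-mutated NSCLC patient on first-line TKI therapy
    by ctDNA status, following the grouping reported in the abstract."""
    if not baseline_ctdna_detectable:
        return "no baseline ctDNA"          # best prognosis
    if cleared_at_2_months:
        return "clearance at 2 months"      # comparably good prognosis
    return "ctDNA retained at 2 months"     # worst prognosis

# Median PFS (months) per group, as reported in the abstract
MEDIAN_PFS = {
    "no baseline ctDNA": 24.07,
    "clearance at 2 months": 19.0,
    "ctDNA retained at 2 months": 10.3,
}
```

A patient with detectable baseline ctDNA that persists at 2 months maps to the worst-prognosis group, matching the abstract's N = 25 cohort.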

Keywords: NSCLC, EGFR, targeted therapy, ctDNA, prognosis

Procedia PDF Downloads 55
1603 Modeling of Cf-252 and PuBe Neutron Sources by Monte Carlo Method in Order to Develop Innovative BNCT Therapy

Authors: Marta Błażkiewicz, Adam Konefał

Abstract:

Currently, boron neutron capture therapy is carried out mainly with neutron beams generated in research nuclear reactors. This limits the possibility of performing BNCT in centers distant from such reactors. Moreover, the number of nuclear reactors in operation worldwide is decreasing due to their limited operational lifetime and the lack of new installations, so the possibilities of carrying out boron-neutron therapy based on a neutron beam from a research reactor are shrinking. The use of nuclear power reactors for BNCT is impossible because their infrastructure is not intended for radiotherapy. A serious challenge, therefore, is to find ways to perform boron-neutron therapy with neutrons generated outside a research nuclear reactor. This work meets that challenge. Its goal is to develop a BNCT technique based on commonly available neutron sources such as Cf-252 and PuBe, which would enable the therapy in medical centers unconnected to nuclear research reactors. Advances in neutron source fabrication make it possible to achieve strong neutron fluxes. The current stage of the research focuses on developing virtual models of these sources using the Monte Carlo simulation method. The GEANT4 toolkit was used, including its high-precision model for simulating neutron-matter interactions (NeutronHP). The source models were verified experimentally using the activation detector method with indium foils and the cadmium difference method, which separates the contributions of thermal and resonance neutrons to the indium activation. Due to the large number of factors affecting the result of the verification experiment, a 10% discrepancy between simulation and experimental results was accepted.
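The cadmium difference method mentioned above splits the indium foil's activation into thermal and epithermal (resonance) components. A minimal numeric sketch, with illustrative activities and the cadmium correction factor assumed to be 1.0 (not values from the study):

```python
def cadmium_difference(a_bare, a_cd, f_cd=1.0):
    """Split foil activation into thermal and epithermal parts.

    a_bare: activity of the bare indium foil (thermal + epithermal response)
    a_cd:   activity of the cadmium-covered foil (epithermal response only,
            since cadmium absorbs neutrons below its ~0.5 eV cutoff)
    f_cd:   cadmium correction factor (assumed 1.0 in this sketch)
    """
    epithermal = f_cd * a_cd
    thermal = a_bare - epithermal
    return thermal, epithermal

def cadmium_ratio(a_bare, a_cd):
    """Bare-to-covered activity ratio; larger values indicate a more
    thermalized neutron field."""
    return a_bare / a_cd

# Illustrative activities in arbitrary units
thermal, epithermal = cadmium_difference(100.0, 20.0)
```

With these made-up numbers, 80 of the 100 activity units are attributed to thermal neutrons and the cadmium ratio is 5.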

Keywords: BNCT, virtual models, neutron sources, monte carlo, GEANT4, neutron activation detectors, gamma spectroscopy

Procedia PDF Downloads 185
1602 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal

Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan

Abstract:

This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, wear with prolonged usage, causing fluctuating vibrations. The sealed design of a gearbox makes visual inspection difficult, which hampers efforts to increase its dependability. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to such vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a data acquisition system (DAQ). Statistical features were extracted from the acquired vibration signals under various operating conditions, and the extracted features were given as input to the algorithms for fault classification. Support Vector Machines (SVM), Deep Feed Forward Neural Networks (DFFNN), and Deep Belief Networks (DBN) were used for fault classification. A fusion of the DBN and DFFNN classifiers was architected to further enhance the classification accuracy and to reduce computational complexity. The fault classification accuracy of each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the best classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
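The feature-extraction step can be illustrated with a few statistical features commonly computed from vibration signals. The abstract does not list which features were used, so the set below is an assumption:

```python
import math
import statistics as st

def vibration_features(signal):
    """Statistical features often extracted from a vibration signal before
    classification (illustrative set, not the paper's exact feature list)."""
    n = len(signal)
    mean = st.fmean(signal)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    std = st.pstdev(signal)
    # Plain fourth-moment ratio (Fisher's definition would subtract 3)
    kurt = sum((x - mean) ** 4 for x in signal) / (n * std ** 4) if std else 0.0
    return {
        "mean": mean,
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms if rms else 0.0,
        "kurtosis": kurt,
    }
```

Each recorded signal would be reduced to such a feature vector, and the vectors, labeled by operating condition, would form the training input for the classifiers.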

Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal

Procedia PDF Downloads 114
1601 Increased Seedling Vigor Through Phytohomeopathy

Authors: Jasper Jose Zanco

Abstract:

Plants are affected by substances diluted below certain limits. In seeds subjected to ultra-high dilutions (UHD) prepared according to phytohomeopathic methods, concentrations can be reduced to infinitesimal levels while the effects persist. This research aimed to test different UHD potencies for modifying the vigor of Eruca vesicaria (L.) Cav. seedlings. The research was carried out at the Plant Production Laboratory of UNISUL University in Santa Catarina, Brazil. Eight UHD treatments were tested, four drops for every 30 mL of distilled water: Control (70% alcohol - A70), Sulphur (S9), Acidum fluoridricum (A30), Calcarea carbonica (C200), Graphites naturalis (G200), Kali carbonicum (K100), and Belladonna (B12), diluted and succussed according to Hahnemannian centesimal standards. Succussion is a standard method in homeopathic pharmacy. The statistical design consisted of 50 seeds in each of 4 replicates per treatment, completely randomized, followed by ANOVA and Tukey's test. Succussion may alter the water of high-dilution treatments even after successive dilutions, and the product of this process may act through physical-chemical and bioelectric stimuli, causing physiological responses at the cellular level, such as activation of antioxidant systems, increased resistance to environmental stress, or growth modulation. According to some researchers, these responses could be mediated by changes in gene expression or by the plants' cellular signaling systems. The results showed significant differences between the control (A70) and the other treatments. Conductivity and impedance measurements were made in the seed germination water; the seedlings were measured for dry weight and total area. The highest conductivity occurred in the control treatment (27.8 μS/cm) and the lowest in K100 (21.3 μS/cm). After germination on germitest paper, A70 was significantly different from G200 (p < 0.01) and S9 (p < 0.05).
Both of these treatments differed from the remaining treatments, with S9 obtaining the best germination (87.1%) and vigor index (IV = 7.98). The control, A70, presented the lowest germination (63.9%) and vigor (IV = 4.93).
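The vigor index (IV) reported above is not defined in the abstract; one widely used seedling vigor index multiplies germination percentage by mean seedling length. A sketch under that assumption, with hypothetical seedling lengths:

```python
def vigor_index(germination_pct, mean_seedling_length_cm):
    """Seedling vigor index: germination percentage times mean seedling
    length, scaled by 100. This is one common definition; the abstract
    does not state which formula the authors used."""
    return germination_pct * mean_seedling_length_cm / 100.0

# Seedling lengths here are assumptions for illustration, not the paper's data
iv_example = vigor_index(87.1, 9.2)
```

Under this definition, higher germination and longer seedlings both raise the index, which is consistent with S9 showing both the best germination and the highest IV.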

Keywords: ultra high dilution, impedance, conductivity, eruca vesicaria

Procedia PDF Downloads 18
1600 Research Related to the Academic Learning Stress, Reflected into PubMed Website Publications

Authors: Ramona-Niculina Jurcau, Ioana-Marieta Jurcau, Dong Hun Kwak, Nicolae-Alexandru Colceriu

Abstract:

Background: The academic environment has, over time, given rise to several research topics that have generated many publications. One of these topics is learning-related stress. Thus far, the PubMed website lists an impressive number of papers related to academic stress. Aims: Through this study, we aimed to evaluate research concerning academic learning stress (ALS) by a retrospective analysis of PubMed publications. Methods: We evaluated ALS considering: a) different keywords - ‘academic stress’ (AS), ‘academic stressors’ (ASs), ‘academic learning stress’ (ALS), ‘academic student stress’ (ASS), ‘academic stress college’ (ASC), ‘medical academic stress’ (MAS), ‘non-medical academic stress’ (NMAS), ‘student stress’ (SS), ‘nursing student stress’ (NS), ‘college student stress’ (CSS), ‘university student stress’ (USS), ‘medical student stress’ (MSS), ‘dental student stress’ (DSS), ‘non-medical student stress’ (NMSS), ‘learning students stress’ (LSS), ‘medical learning student stress’ (MLSS), ‘non-medical learning student stress’ (NMLSS); b) the yearly average for each decade; c) some selection filters provided by the PubMed website: Article types - Journal Article (JA), Clinical Trial (CT), Review (R); Species - Humans (H); Sex - Male (M) and Female (F); Ages - 13-18, 19-24, 19-44. Statistical evaluation was based on Student's t-test. Results: There were differences between keywords across all filters. Nevertheless, for all keywords, the majority of studies indicated that the subjects were human; there were no important differences between the numbers of M and F subjects; the age of participants was mentioned only in some studies, with teenagers and subjects aged 19-24 predominating. Conclusions: 1) PubMed publications document that research interest in academic stress spans 56 years and has produced more than 5,010 papers.
2) The number of publications in the field of academic stress varies depending on the selected keywords: those with a general framing (AS, ASs, ALS, ASS, SS, USS, LSS) are more numerous than those with a specific framing (ASC, MAS, NMAS, NS, CSS, MSS, DSS, NMSS, MLSS, NMLSS); those concerning the medical academic environment (MAS, NS, MSS, DSS, MLSS) prevailed over those concerning the non-medical environment (NMAS, NMSS, NMLSS). 3) Most of the publications are JA, of which a small percentage are CT and R. 4) Most academic stress studies were conducted with both M and F subjects, most aged under 19 years or between 19 and 24 years.
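The study's statistical evaluation used Student's t-test; a minimal two-sample, pooled-variance version in pure Python, applied here to made-up yearly publication counts for a general versus a specific keyword (the counts are illustrative, not the study's data):

```python
import math

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance
    (assumes equal variances, as in the classical test)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)  # sum of squared deviations, group a
    ssb = sum((x - mb) ** 2 for x in b)
    pooled_var = (ssa + ssb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(pooled_var * (1 / na + 1 / nb))

# Hypothetical yearly counts for a general vs. a specific keyword
general = [120, 135, 150, 160, 175]
specific = [30, 28, 35, 33, 40]
t = students_t(general, specific)
```

The resulting t statistic would then be compared against the t distribution with na + nb − 2 degrees of freedom to obtain a p-value.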

Keywords: academic stress, student stress, academic learning stress, medical student stress

Procedia PDF Downloads 562
1599 Development of Power System Stability by Reactive Power Planning in Wind Power Plants with Doubly Fed Induction Generators

Authors: Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee, Oriol Gomis Bellmunt, Vinicius Albernaz Lacerda Freitas

Abstract:

The use of distributed and renewable sources in power systems has grown significantly in recent years. One of the most popular sources is the wind farm, whose deployment has grown massively. However, when wind farms are connected to the grid, they can cause problems such as reduced voltage stability, frequency fluctuations, and reduced dynamic stability. Because wind speed cannot be controlled, variable-speed (asynchronous) generators are used, especially doubly fed induction generators (DFIGs). The most important disadvantage of DFIGs is their sensitivity to voltage drops: in the case of faults, a large amount of reactive power is absorbed, so FACTS devices such as the SVC and STATCOM are suitable for improving system output performance. They increase the capacity of lines and help the system ride through network fault conditions. In this paper, in addition to modeling the reactive power control system in a DFIG with its converter, FACTS devices are used with a DFIG wind turbine to improve the stability of a power system containing two synchronous sources. Optimal control systems are designed for the employed FACTS devices to minimize the fluctuations caused by system disturbances. For this purpose, a method is proposed for selecting the nine parameters of the MPSH phase lead-lag compensators of the reactive power compensators. The design task is formulated as an optimization problem searching for the optimal controller parameters. Simulation results show that the proposed controller improves the stability of the network and that the fluctuations are damped at the desired rate.
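The keywords name a genetic algorithm as the optimizer; a minimal GA sketch searching over nine controller parameters, with a stand-in quadratic cost in place of the paper's actual damping objective (the objective, bounds, and GA settings below are all assumptions):

```python
import random

def evolve(objective, n_params=9, bounds=(-5.0, 5.0), pop_size=40,
           generations=60, mutation_rate=0.2, seed=0):
    """Simple elitist genetic algorithm minimizing `objective` over
    a vector of controller parameters (illustrative, not the paper's code)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if objective(a) < objective(b) else b

    best = min(pop, key=objective)
    for _ in range(generations):
        new_pop = [best[:]]  # elitism: keep the best individual
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_params)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_params):
                if rng.random() < mutation_rate:  # Gaussian mutation, clamped
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.5)))
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=objective)
    return best, objective(best)

# Stand-in cost: in the study this would be a measure of oscillation damping
def surrogate_cost(params):
    return sum(x * x for x in params)

best_params, best_cost = evolve(surrogate_cost)
```

In the actual design, evaluating the objective would involve simulating the power system's response to a disturbance with the candidate compensator parameters and scoring the resulting oscillations.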

Keywords: renewable energy sources, optimization, wind power plant, stability, reactive power compensator, doubly fed induction generator, optimal control, genetic algorithm

Procedia PDF Downloads 95