Search results for: test automation quality
1923 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer
Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi
Abstract:
Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL) of nominal 100 m depth, which can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the structure are of the same order of magnitude. Turbulence length scales are a measure of the average size of the energy-containing eddies; they are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance via Taylor’s hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally-stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between the turbulence length scales and aerodynamic roughness length with those calculated using the autocorrelations and cross-correlations of field measurement velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, as opposed to the relationships derived from the similarity theory correlations in the ESDU models.
However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales
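The estimation procedure this abstract describes can be sketched numerically: normalise the autocorrelation of a velocity record, integrate it up to its first zero crossing to obtain an integral time scale, and convert to a length scale via Taylor's hypothesis (L = U * T). The sketch below is a generic illustration with a synthetic AR(1) record standing in for field data; the function name and parameter choices are illustrative assumptions, not the ESDU procedures or the study's code.

```python
import numpy as np

def integral_length_scale(u, dt, mean_velocity):
    """Estimate a longitudinal integral length scale: normalise the
    autocorrelation of the velocity fluctuations, integrate it up to its
    first zero crossing (integral time scale T), then apply Taylor's
    frozen-turbulence hypothesis, L = U * T."""
    fluct = u - u.mean()
    acf = np.correlate(fluct, fluct, mode="full")[len(u) - 1:]
    acf = acf / acf[0]                       # acf[0] == 1 after normalising
    zero = np.argmax(acf <= 0) if np.any(acf <= 0) else len(acf)
    T = acf[:zero].sum() * dt                # rectangle-rule time integral [s]
    return mean_velocity * T

# synthetic velocity record: AR(1) noise with a known correlation time tau
rng = np.random.default_rng(0)
dt, n, tau = 0.1, 20000, 2.0                 # sample period [s], samples, target T [s]
rho = np.exp(-dt / tau)
u = np.zeros(n)
for i in range(1, n):
    u[i] = rho * u[i - 1] + rng.normal(scale=np.sqrt(1 - rho ** 2))
U = 8.0                                      # mean wind speed [m/s]
L = integral_length_scale(u + U, dt, U)
print(f"integral length scale ~ {L:.1f} m (target U*tau = {U * tau:.0f} m)")
```

For the AR(1) record the true integral time scale is tau, so the estimate should land near U * tau = 16 m, with scatter from the finite record length, which is exactly the sampling uncertainty that makes field estimates of longitudinal length scales vary between sites.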
Procedia PDF Downloads 124
1922 Advanced Compound Coating for Delaying Corrosion of Fast-Dissolving Alloy in High Temperature and Corrosive Environment
Authors: Lei Zhao, Yi Song, Tim Dunne, Jiaxiang (Jason) Ren, Wenhan Yue, Lei Yang, Li Wen, Yu Liu
Abstract:
Fast-dissolving magnesium (DM) alloy technology has contributed significantly to the “Shale Revolution” in the oil and gas industry. This application requires DM downhole tools to dissolve initially at a slow rate and then accelerate rapidly to a high rate after a certain period of operation (typically 8 h to 2 days), a contradictory requirement that can hardly be addressed by traditional Mg alloying or processing alone. Premature disintegration of downhole DM tools has been broadly reported from field trials. To address this issue, “temporary” thin polymer coatings of various formulations are currently applied to the DM surface to delay its initial dissolution. Because of conveyed parts, harsh downhole conditions, and the high dissolution rate of the base material, the current delay coatings relying on pure polymers are found to perform well only at low temperature (typically < 100 ℃) and on parts without sharp edges or corners, as severe geometries prevent high-quality thin-film coatings from forming effectively. In this study, a coating technology combining Plasma Electrolytic Oxide (PEO) coatings with advanced thin-film deposition has been developed, which can delay the dissolution of complex DM parts (with sharp corners) in corrosive fluid at 150 ℃ for over 2 days. Synergistic effects between the porous hard PEO coating and the chemically inert elastic-polymer sealing lead to the improved dissolution delay, and the strong chemical/physical bonding between these two layers has been found to play an essential role. The microstructure of this advanced coating and the compatibility between PEO and various polymer selections have been thoroughly investigated, and a model is also proposed to explain the delaying performance.
This study could not only benefit the oil and gas industry in unplugging High Temperature High Pressure (HTHP) unconventional resources that were previously inaccessible, but also provide a technical route for other industries (e.g., biomedical, automobile, aerospace) where primary anti-corrosive protection of light Mg alloys is in high demand.
Keywords: dissolvable magnesium, coating, plasma electrolytic oxide, sealer
Procedia PDF Downloads 111
1921 Epoxomicin Affects Proliferating Neural Progenitor Cells of Rat
Authors: Bahaa Eldin A. Fouda, Khaled N. Yossef, Mohamed Elhosseny, Ahmed Lotfy, Mohamed Salama, Mohamed Sobh
Abstract:
Developmental neurotoxicity (DNT) entails the toxic effects imparted by various chemicals on the brain during early childhood. As human brains are vulnerable during this period, various chemicals would have their maximum effects on the brain during early childhood. Some toxicants, e.g. lead, have been confirmed to induce developmental toxic effects on the CNS; however, most agents cannot be identified with certainty due to the limitations of the predictive toxicology models used. A novel alternative method that can overcome most of the limitations of conventional techniques is the use of the 3D neurosphere system. This in-vitro system can recapitulate most of the changes during the period of brain development, making it an ideal model for predicting neurotoxic effects. In the present study, we verified the possible DNT of epoxomicin, a naturally occurring selective proteasome inhibitor with anti-inflammatory activity. Rat neural progenitor cells were isolated from rat embryos (E14) extracted from placental tissue. The cortices were aseptically dissected out from the brains of the fetuses, and the tissues were triturated by repeated passage through a fire-polished constricted Pasteur pipette. The dispersed tissues were allowed to settle for 3 min. The supernatant was then transferred to a fresh tube and centrifuged at 1,000 g for 5 min. The pellet was placed in Hank’s balanced salt solution and cultured as free-floating neurospheres in proliferation medium. Two doses of epoxomicin (1 µM and 10 µM) were applied to cultured neurospheres for a period of 14 days. For proliferation analysis, spheres were cultured in proliferation medium. After 0, 4, 5, 11, and 14 days, sphere size was determined by software analysis. The diameter of each neurosphere was measured and exported to an Excel file for further statistical analysis.
For viability analysis, trypsin-EDTA solution was added to the neurospheres for 3 min to dissociate them into a single-cell suspension, and viability was then evaluated by the Trypan Blue exclusion test. Epoxomicin was found to affect the proliferation and viability of neurospheres, and these effects were positively correlated with dose and with the progress of time. This study confirms the DNT effects of epoxomicin in the 3D neurosphere model. The effects on proliferation suggest possible gross morphologic changes, while the decrease in viability suggests possible focal lesions on exposure to epoxomicin during early childhood.
Keywords: neural progenitor cells, epoxomicin, neurosphere, medical and health sciences
Procedia PDF Downloads 427
1920 Study of Chemical and Physical - Mechanical Properties Lime Mortar with Addition of Natural Resins
Authors: I. Poot-Ocejo, H. Silva-Poot, J. C. Cruz, A. Yeladaqui-Tello
Abstract:
Mexico has remarkable archaeological remains, mainly in the Maya area, which are critical to the preservation of our cultural heritage, so the authorities have an interest in preserving and restoring these vestiges in the most original way possible by employing traditional techniques, which have advantages such as compatibility, durability, strength, uniformity, and chemical composition. Recent studies have confirmed the benefit of adding natural resins extracted from the bark of trees, of which Brosium alicastrum (Ramon) has been the most evaluated, besides being one of the most abundant species in the vicinity of the archaeological sites, as is Manilkara zapota (Chicozapote). Therefore, the objective is to determine whether these resins are capable of being employed in archaeological restoration. This study shows the results of the chemical composition and physical-mechanical behavior of eight mortar mixtures made with commercial lime and lime slaked by hand, calcareous sand, and added resins of Brosium alicastrum (Ramon) and Manilkara zapota (Chicozapote). The properties and chemical composition of the resins were determined and quantified by X-Ray Fluorescence (XRF); the pH of the material was determined, indicating that both resins are acidic (3.78 and 4.02), and the maximum addition rate of the resins in water was obtained by means of pulsed ultrasonic baths, being 10% in the case of Manilkara zapota, because it contains up to 40% rubber, and 40% for Brosium alicastrum, which contains less rubber. Through a quantitative methodology, the compressive strength of 96 binding-mortar specimens of 5 cm x 5 cm x 5 cm was evaluated: 72 with partial substitution of the mixing water by natural resins, in proportions of 5 and 10% in the case of Manilkara zapota and of 20 and 40% for Brosium alicastrum, plus 12 with artificial resin and 12 without additive (control mortars).
Likewise, 24 brick specimens bonded with mortar were prepared for shear adhesion testing; the microstructure of the most favorable additions was then determined by SEM analysis. The test results indicate that the addition of Manilkara zapota resin in the proportion of 10% increased compressive strength by 1.5% and adhesion by 1%, compared to the control mortar without addition. In the case of Brosium alicastrum, the results show that the gains in compressive strength and adhesion were insignificant compared to those registered by the Manilkara zapota mixtures. Mortars containing the natural resins show improvements in physical properties and an increase in mechanical strength and adhesion compared to those that do not, and in addition the components are chemically compatible; it is therefore considered that they can be employed in archaeological restoration.
Keywords: lime, mortar, natural resins, Manilkara zapota mixtures, Brosium alicastrum
Procedia PDF Downloads 371
1919 Fabrication and Characterisation of Additive Manufactured Ti-6Al-4V Parts by Laser Powder Bed Fusion Technique
Authors: Norica Godja, Andreas Schindel, Luka Payrits, Zsolt Pasztor, Bálint Hegedüs, Petr Homola, Jan Horňas, Jiří Běhal, Roman Ruzek, Martin Holzleitner, Sascha Senck
Abstract:
In order to reduce fuel consumption and CO₂ emissions in the aviation sector, innovative solutions are being sought to reduce the weight of aircraft, including additive manufacturing (AM). Of particular importance are the excellent mechanical properties that are required for aircraft structures. Ti6Al4V alloys, with their high mechanical properties in relation to weight, can reduce the weight of aircraft structures compared to structures made of steel and aluminium. Currently, conventional processes such as casting and CNC machining are used to obtain the desired structures, resulting in high raw material removal, which in turn leads to higher costs and impacts the environment. Additive manufacturing (AM) offers advantages in terms of weight, lead time, design, and functionality and enables the realisation of alternative geometric shapes with high mechanical properties. However, there are currently technological shortcomings that have led to AM not being approved for structural components with high safety requirements. An assessment of damage tolerance for AM parts is required, and quality control needs to be improved. Pores and other defects cannot be completely avoided at present, but they should be kept to a minimum during manufacture. The mechanical properties of the manufactured parts can be further improved by various treatments. The influence of different treatment methods (heat treatment, CNC milling, electropolishing, chemical polishing) and operating parameters were investigated by scanning electron microscopy with energy dispersive X-ray spectroscopy (SEM/EDX), X-ray diffraction (XRD), electron backscatter diffraction (EBSD) and measurements with a focused ion beam (FIB), taking into account surface roughness, possible anomalies in the chemical composition of the surface and possible cracks. The results of the characterisation of the constructed and treated samples are discussed and presented in this paper. 
These results were generated within the framework of the 3TANIUM project, which is financed by the EU under contract number 101007830.
Keywords: Ti6Al4V alloys, laser powder bed fusion, damage tolerance, heat treatment, electropolishing, potential cracking
Procedia PDF Downloads 85
1918 Antibacterial Effect of Silver Diamine Fluoride Incorporated in Fissure Sealants
Authors: Nélio Veiga, Paula Ferreira, Tiago Correia, Maria J. Correia, Carlos Pereira, Odete Amaral, Ilídio J. Correia
Abstract:
Introduction: The application of fissure sealants is considered an important primary prevention method in dental medicine. However, the formation of microleakage gaps between tooth enamel and the applied fissure sealant is one of the most common reasons for the development of dental caries in teeth with fissure sealants. Associating different dental biomaterials may limit their major disadvantages and limitations, with the materials functioning in a complementary manner. The present study consists of the incorporation of a cariostatic agent, silver diamine fluoride (SDF), into a resin-based fissure sealant, followed by the study of the release kinetics of the association between both biomaterials by spectrophotometric analysis and assessment of the inhibitory effect on the growth of the reference bacterial strain Streptococcus mutans (S. mutans) in an in vitro study. Materials and Methods: An experimental in vitro study was designed, consisting of the entrapment of SDF (Cariestop® 12% and 30%) in a commercially available fissure sealant (Fissurit®) by photopolymerization and photocrosslinking. The same sealant, without SDF, was used as a negative control. The effect of the sealants on the growth of S. mutans was determined by the presence of bacterial inhibition halos in the cultures at the end of the incubation period. In order to confirm the absence of bacteria on the surface of the materials, Scanning Electron Microscopy (SEM) characterization was performed. Spectrophotometry was also applied to analyze the release profile of SDF over time. Results: The obtained results indicate that the association of SDF with a resin-based fissure sealant may be able to increase the inhibition of S. mutans growth. However, no SDF release was noticed during the in vitro release studies, and no statistically significant difference was verified when comparing the inhibition halo sizes obtained for the test and control groups.
Conclusions: In this study, the entrapment of SDF in the resin-based fissure sealant did not potentiate the antibacterial effect of the fissure sealant or prevent the immediate development of dental caries. Further laboratory research and, afterwards, long-term clinical data are necessary in order to verify whether this association between these biomaterials is effective and can be considered for use in oral health management. Other methodologies for associating cariostatic agents and sealants should also be addressed.
Keywords: biomaterial, fissure sealant, primary prevention, silver diamine fluoride
Procedia PDF Downloads 259
1917 Comparison of Finite Difference Schemes for Numerical Study of Ripa Model
Authors: Sidrah Ahmed
Abstract:
River and lake flows are modeled mathematically by the shallow water equations, which are depth-averaged Reynolds-Averaged Navier-Stokes equations under the Boussinesq approximation. Temperature stratification dynamics influence the water quality and mixing characteristics. Stratification is mainly due to atmospheric conditions, including air temperature, wind velocity, and radiative forcing. Experimental observations are commonly taken along vertical profiles and are not sufficient to estimate the small-scale turbulence effects of temperature-variation-induced characteristics of shallow flows. Wind shear stress over the water surface influences flow patterns, heat fluxes, and the thermodynamics of water bodies as well. Hence it is crucial to couple temperature gradients with the shallow water model to estimate the atmospheric effects on flow patterns. The Ripa system has been introduced to study ocean currents as a variant of the shallow water equations with the addition of temperature variations within the flow. The Ripa model is a hyperbolic system of partial differential equations because all the eigenvalues of the system’s Jacobian matrix are real and distinct. The time steps of a numerical scheme are estimated from the eigenvalues of the system. The solution to the Riemann problem of the Ripa model is composed of shocks, contact and rarefaction waves. Solving the Ripa model with Riemann initial data using central schemes is difficult due to the eigenstructure of the system. This work presents the comparison of four different finite difference schemes for the numerical solution of the Riemann problem for the Ripa model. These schemes include the Lax-Friedrichs, Lax-Wendroff, and MacCormack schemes and a higher-order finite difference scheme with the WENO method. The numerical flux functions in both dimensions are approximated according to these methods. Temporal accuracy is achieved by employing a TVD Runge-Kutta method. Numerical tests are presented to examine the accuracy and robustness of the applied methods.
It is revealed that the Lax-Friedrichs scheme produces results with oscillations, while the Lax-Wendroff and higher-order difference schemes produce considerably better results.
Keywords: finite difference schemes, Riemann problem, shallow water equations, temperature gradients
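For readers unfamiliar with the scheme family being compared, the simplest of them, Lax-Friedrichs, can be sketched for a 1D scalar conservation law u_t + f(u)_x = 0. This is a generic illustration on linear advection, not the full two-dimensional Ripa system; the grid sizes and CFL number below are arbitrary choices for the demo.

```python
import numpy as np

def lax_friedrichs_step(u, f, dx, dt):
    """One Lax-Friedrichs step for u_t + f(u)_x = 0 with periodic
    boundaries:  u_i^{n+1} = (u_{i-1} + u_{i+1}) / 2
                             - dt / (2 dx) * (f(u_{i+1}) - f(u_{i-1}))"""
    up = np.roll(u, -1)   # u_{i+1}
    um = np.roll(u, 1)    # u_{i-1}
    return 0.5 * (up + um) - dt / (2.0 * dx) * (f(up) - f(um))

# linear advection f(u) = a*u: the exact solution is a pure rightward shift
a = 1.0
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx / abs(a)            # CFL number 0.4 < 1 keeps the scheme stable
u = np.exp(-100.0 * (x - 0.5) ** 2)   # initial Gaussian pulse at x = 0.5

t = 0.0
while t < 0.25:
    u = lax_friedrichs_step(u, lambda v: a * v, dx, dt)
    t += dt
# the pulse has advected right by ~0.25 and been smeared by the scheme's
# numerical diffusion (its peak amplitude drops below the initial 1.0)
print("peak location:", x[np.argmax(u)])
```

The same update structure carries over to systems: for the Ripa model, u becomes the vector of conserved variables and f its flux, with dt chosen from the eigenvalues of the Jacobian as the abstract describes.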
Procedia PDF Downloads 203
1916 The Role of Autophagy Modulation in Angiotensin-II Induced Hypertrophy
Authors: Kitti Szoke, Laszlo Szoke, Attila Czompa, Arpad Tosaki, Istvan Lekli
Abstract:
Autophagy plays an important role in cardiac hypertrophy, which is one of the most common causes of heart failure in the world. This self-degradative catabolic process is responsible for protein quality control, balancing sources of energy at critical times, and the elimination of damaged organelles. Autophagic activity can be triggered by starvation, oxidative stress, or pharmacological agents such as rapamycin. Induced autophagy can promote cell survival during starvation or pathological stress. In this study, the effect of the induced autophagic process on angiotensin-induced hypertrophic H9c2 cells was investigated. H9c2 cells were used as an in vitro model. To induce hypertrophy, cells were treated with 10000 nM angiotensin-II, and to activate autophagy, 100 nM rapamycin treatment was used. The following groups were formed: 1: control, 2: 10000 nM AT-II, 3: 100 nM rapamycin, 4: 100 nM rapamycin pretreatment followed by 10000 nM AT-II. Cell viability was examined via the MTT cell proliferation assay. The cells were stained with rhodamine-conjugated phalloidin and DAPI to visualize F-actin filaments and cell nuclei, and the cell size alteration was then examined in a fluorescence microscope. Furthermore, the expression levels of autophagic and apoptotic proteins such as Beclin-1, p62, LC3B-II, and Cleaved Caspase-3 were evaluated by Western blot. The MTT assay results suggest that the pharmacological agents used at the tested concentrations did not have a toxic effect; however, in group 3, a slight decrease in cell viability was detected. In response to AT-II treatment, a significant increase in cell size was detected; cells became hypertrophic. However, rapamycin pretreatment slightly reduced the cell size compared to group 2. Western blot results showed that AT-II treatment induced autophagy, as increased expression of Beclin-1, p62, and LC3B-II was observed.
However, due to incomplete autophagy, the expression of apoptotic Cleaved Caspase-3 also increased. Rapamycin pretreatment up-regulated Beclin-1 and LC3B-II and down-regulated p62 and Cleaved Caspase-3, indicating that rapamycin-induced autophagy can restore the normal autophagic flux. Taken together, our results suggest that rapamycin-activated autophagy reduces angiotensin-II induced hypertrophy.
Keywords: angiotensin-II, autophagy, H9c2 cell line, hypertrophy, rapamycin
Procedia PDF Downloads 147
1915 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering
Authors: R. Nandhini, Gaurab Mudbhari
Abstract:
Ranging from health care to self-driving cars, machine learning and deep learning algorithms have revolutionized fields through the proper utilization of images and visual-oriented data. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the machine learning tasks that have helped machine learning and deep learning models become state-of-the-art for fields where images are key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of Convolutional Neural Networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Due to the distinctive structural attributes present in images, conventional methods often fail to effectively capture spatial patterns, motivating models that utilize more advanced architectures and attention mechanisms. In image classification, we investigated both CNNs and ViTs. The CNN, well known for its ability to detect spatial hierarchies, serves as one core model in our study. The ViT serves as the other core model, reflecting a modern classification approach that uses a self-attention mechanism; this mechanism makes ViTs more robust, as it allows them to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures based on accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images.
In the domain of clustering, we assess DEC, Variational Autoencoders (VAEs), and conventional clustering techniques like k-means applied to embeddings derived from CNN models. DEC, a prominent clustering model, has gained the attention of many ML engineers because of its ability to combine feature learning and clustering into a single framework; its main goal is to improve clustering quality through better feature representation. VAEs, on the other hand, are well known for using latent embeddings to group similar images without requiring prior labels, utilizing a probabilistic clustering approach.
Keywords: machine learning, deep learning, image classification, image clustering
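The "k-means on CNN-derived embeddings" baseline mentioned above can be sketched with a minimal Lloyd's-algorithm implementation. The random Gaussian blobs below are a stand-in for real CNN feature vectors, and the deterministic one-centroid-per-blob initialisation is a demo simplification; a real pipeline would use k-means++ initialisation.

```python
import numpy as np

def kmeans(X, init_idx, iters=20):
    """Minimal Lloyd's algorithm: alternately assign each point to its
    nearest centroid and move each centroid to its cluster mean."""
    centroids = X[np.array(init_idx)]          # fancy indexing copies the rows
    for _ in range(iters):
        # squared Euclidean distances, shape (n_points, k)
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(init_idx)):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

# stand-in "CNN embeddings": three well-separated Gaussian blobs in 8-D
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 8))
               for c in (0.0, 3.0, 6.0)])
# seed one centroid per blob so the demo is deterministic
labels = kmeans(X, init_idx=[0, 100, 200])
# each blob of 100 points falls into exactly one cluster
print([len(set(labels[i * 100:(i + 1) * 100].tolist())) for i in range(3)])
```

DEC differs from this baseline in that the embedding itself is refined jointly with the cluster assignments, rather than being fixed up front as it is here.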
Procedia PDF Downloads 12
1914 Exploration of the Possible Link Between Emotional Problems and Cholesterol Levels Among Children Diagnosed with Attention-Deficit Hyperactivity Disorder
Authors: Rosa S. Wong, Keith T.S. Tung, H.W. Tsang, Frederick K. Ho, Patrick Ip
Abstract:
Attention-deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder characterized by inattention and hyperactive-impulsive behavior. Evidence shows that ADHD and mood problems such as depression and anxiety often co-occur, and yet not everyone with ADHD reports elevated emotional problems. Cholesterol is essential for healthy brain development, including in the regions governing emotion regulation, and reports have found lower cholesterol levels in patients with major depressive disorder and in those with suicide-attempt behavior compared to healthy subjects. This study explored whether ADHD adolescents experienced more emotional problems and whether emotional problems correlated with cholesterol levels in these adolescents. This study used a portion of data from a longitudinal cohort study designed to investigate the long-term impact of family socioeconomic status on child development. In 2018/19, parents of 300 adolescents (average age: 12.57+/-0.49 years) were asked to rate their children’s emotional problems and report whether their children had doctor-diagnosed psychiatric diseases. We further collected blood samples from 263 children to study their lipid profiles (total cholesterol, high-density lipoprotein (HDL)-cholesterol, and low-density lipoprotein (LDL)-cholesterol). Regression analyses were performed to test the relationships between the variables of interest. Among the 300 children, 27 (9%) had an ADHD diagnosis. Analysis of the overall sample found no association between ADHD and emotional problems, but when the relationship was investigated by gender, there was a significant interaction effect of ADHD and gender on emotional problems (p=0.037), with ADHD males displaying more emotional problems than ADHD females.
Further analyses based on 263 children (21 with an ADHD diagnosis) found a significant interaction effect of ADHD and gender on total cholesterol (p=0.038) and LDL-cholesterol levels (p=0.013) after adjusting for the child’s physical disease history. Specifically, ADHD males had significantly lower total cholesterol and LDL-cholesterol levels than ADHD females. In ADHD males, more emotional problems were associated with lower LDL-cholesterol levels (B = -4.26, 95%CI (-7.46, -1.07), p=0.013). We found preliminary support for the association between more emotional problems and lower cholesterol levels in ADHD children, especially among males. Although larger prospective studies are needed to substantiate these claims, the evidence highlights the importance of a healthy lifestyle to keep cholesterol levels in the normal range, which can have positive effects on physical and mental health.
Keywords: attention-deficit hyperactivity disorder, cholesterol, emotional problems, adolescents
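Methodologically, the interaction effects reported above follow the standard moderated-regression form, outcome = b0 + b1*ADHD + b2*male + b3*(ADHD x male) + error, where b3 is the interaction coefficient being tested. A toy sketch of fitting and reading off that coefficient with ordinary least squares (entirely synthetic data, not the study's; the effect size is an invented illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
adhd = rng.integers(0, 2, n)           # 1 = ADHD diagnosis
male = rng.integers(0, 2, n)           # 1 = male
# synthetic outcome: cholesterol is lower only in the ADHD-male cell,
# i.e. the whole group difference sits in the interaction term
chol = 4.5 - 0.6 * adhd * male + rng.normal(scale=0.3, size=n)

# design matrix: intercept, main effects, and the interaction column
X = np.column_stack([np.ones(n), adhd, male, adhd * male])
beta, *_ = np.linalg.lstsq(X, chol, rcond=None)
b0, b_adhd, b_male, b_inter = beta
print(f"interaction coefficient: {b_inter:.2f}  (true value: -0.60)")
```

The fitted b_inter recovers the simulated -0.6 within sampling error, while the main-effect coefficients stay near zero: exactly the pattern of "no overall association, but a significant ADHD-by-gender interaction" that the abstract reports.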
Procedia PDF Downloads 149
1913 Intensity Modulated Radiotherapy of Nasopharyngeal Carcinomas: Patterns of Loco Regional Relapse
Authors: Omar Nouri, Wafa Mnejja, Nejla Fourati, Fatma Dhouib, Wicem Siala, Ilhem Charfeddine, Afef Khanfir, Jamel Daoud
Abstract:
Background and objective: Induction chemotherapy (IC) followed by concomitant chemoradiotherapy with the intensity modulated radiation therapy (IMRT) technique is currently the recommended treatment modality for locally advanced nasopharyngeal carcinomas (NPC). The aim of this study was to evaluate the prognostic factors predicting loco-regional relapse with this new treatment protocol. Patients and methods: A retrospective study of 52 patients with NPC treated between June 2016 and July 2019. All patients received IC according to the protocol of the Head and Neck Radiotherapy Oncology Group (Gortec) NPC 2006 (3 TPF courses) followed by concomitant chemoradiotherapy with weekly cisplatin (40 mg/m²). Patients received IMRT with a simultaneous integrated boost (SIB) of 33 daily fractions at a dose of 69.96 Gy for the high-risk volume, 60 Gy for the intermediate-risk volume, and 54 Gy for the low-risk volume. Median age was 49 years (19-69) with a sex ratio of 3.3. Forty-five tumors (86.5%) were classified as stages III-IV according to the 2017 UICC TNM classification. Loco-regional relapse (LRR) was defined as a local and/or regional progression occurring at least 6 months after the end of treatment. Survival analysis was performed according to the Kaplan-Meier method, and the log-rank test was used to compare anatomical, clinical, and therapeutic factors that may influence loco-regional relapse-free survival (LRFS). Results: After a median follow-up of 42 months, 6 patients (11.5%) experienced LRR. A metastatic relapse was also noted for 3 of these patients (50%). Target volume coverage was optimal for all patients with LRR. Four relapses (66.6%) were in the high-risk target volume and two (33.3%) were borderline. Three-year LRFS was 85.9%.
Four factors predicted loco-regional relapse: histologic type other than undifferentiated carcinoma of nasopharyngeal type (UCNT) (p=0.027), a macroscopic pre-chemotherapy tumor volume exceeding 100 cm³ (p=0.005), a reduction in IC doses exceeding 20% (p=0.016), and a total cumulative cisplatin dose of less than 380 mg/m² (p=0.034). TNM classification and response to IC did not impact loco-regional relapse. Conclusion: For nasopharyngeal carcinoma, tumors with high initial volume and/or a histologic type other than UCNT have a higher risk of loco-regional relapse. Therefore, they require more aggressive therapeutic approaches and a suitable monitoring protocol.
Keywords: loco-regional relapse, intensity modulated radiotherapy, nasopharyngeal carcinoma, prognostic factors
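Survival figures like the three-year LRFS quoted above come from the Kaplan-Meier product-limit estimator, S(t) = product over event times of (1 - d_i / n_i), where d_i is the number of events at time t_i and n_i the number still at risk. A minimal illustrative implementation follows; the follow-up times below are made up for the demo, not the cohort's data:

```python
def kaplan_meier(times, events):
    """Product-limit estimator. `times` are follow-up durations and
    `events` is 1 for a relapse/death and 0 for censoring at that time.
    Returns (event_time, survival_probability) pairs."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = leaving = 0
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]        # events observed at this time
            leaving += 1                 # subjects leaving the risk set
            i += 1
        if d:
            surv *= 1.0 - d / at_risk    # survival steps down at event times
            curve.append((t, surv))
        at_risk -= leaving               # censored subjects drop out silently
    return curve

# illustrative follow-up in months; event = 1 relapse, 0 censored
times = [6, 12, 12, 18, 24, 30, 36, 42, 42, 48]
events = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
for t, s in kaplan_meier(times, events):
    print(f"t={t:>2} months  S(t)={s:.3f}")
```

Censored patients (events = 0) contribute to the risk set up to their censoring time but cause no step in the curve, which is what lets a cohort with incomplete follow-up still yield an unbiased survival estimate.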
Procedia PDF Downloads 128
1912 Advancements in AI Training and Education for a Future-Ready Healthcare System
Authors: Shamie Kumar
Abstract:
Background: Radiologists and radiographers (RR) need to educate themselves and their colleagues to ensure that AI is integrated safely and usefully, in a meaningful way and in a direction that always benefits patients. AI education and training are fundamental to the way RR work and interact with it, such that they feel confident using it as part of their clinical practice in a way they understand. Methodology: This exploratory research outlines the current education and training gaps for radiographers and radiologists in AI radiology diagnostics. It reviews the status, skills, and challenges of educating and teaching, the understanding of artificial intelligence within daily clinical practice, why it is fundamental, and the justification for why learning about AI is essential for wider adoption. Results: Current knowledge among RR is very sparse and country dependent, and with radiologists being the majority of end-users for AI, their targeted AI training and learning opportunities surpass those available to radiographers. Many papers suggest there is a lack of knowledge, understanding, and training in AI in radiology amongst RR, and because of this, they are unable to comprehend exactly how AI works, how it integrates, the benefits of using it, and its limitations. There is an indication that they wish to receive specific training; however, both professions need to actively engage in learning about it and develop the skills that enable them to use it effectively. Variability is expected amongst the professions in their degree of commitment to AI, as most do not understand its value; this only adds to the need to train and educate RR. Currently, there is little AI teaching in either undergraduate or postgraduate study programs, and it is not readily available.
In addition, there are other training programs, courses, workshops, and seminars available; most of these are short, single sessions rather than a continuation of learning, covering a basic understanding of AI and peripheral topics such as ethics, legal issues, and the potential of AI. There appears to be an obvious gap between the content such training programs offer and what RR need and want to learn. Because of this, there is a risk of ineffective learning outcomes and of attendees feeling a lack of clarity and depth of understanding of the practicality of using AI in a clinical environment. Conclusion: Education, training, and courses need to have defined learning outcomes with relevant concepts, ensuring theory and practice are taught as a continuation of the learning process based on use cases specific to a clinical working environment. Undergraduate and postgraduate courses should be developed robustly, ensuring they are delivered with expertise in the field; in addition, training and other programs should be delivered as a form of continued professional development and aligned with accredited institutions for a degree of quality assurance.
Keywords: artificial intelligence, training, radiology, education, learning
Procedia PDF Downloads 85
1911 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data
Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton
Abstract:
The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized. This is the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing 54 papers identified in this area, it was noted that 23 of the papers originated in Germany. This is an unsurprising finding, as Industry 4.0 is originally a German strategy, with strong supporting policy instruments utilized in Germany to support its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area. There exists an unresolved gap between the data science experts and the manufacturing process experts in the industry. The data analytics expertise is not useful unless the manufacturing process information is utilized.
A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test whether this gap exists, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 contributed a framework, another 12 were based on a case study, and 11 focused on theory. However, only three papers contributed a methodology. This provides further evidence of the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.
Keywords: analytics, digitization, industry 4.0, manufacturing
Procedia PDF Downloads 111
1910 How Technology Can Help Teachers in Reflective Practice
Authors: Ambika Perisamy, Asyriawati binte Mohd Hamzah
Abstract:
The focus of this presentation is to discuss teacher professional development (TPD) through the use of technology. TPD is necessary to prepare teachers for the future challenges they will face throughout their careers and to develop new skills and good teaching practices. We will also discuss current issues in embracing technology in the field of early childhood education and the impact on the professional development of teachers. Participants will also learn to apply teaching and learning practices through the use of technology. One major objective of this presentation is to coherently fuse practical, technological and theoretical content. The process begins by concretizing a set of preconceived ideas, which then need to be joined with theoretical justifications found in the literature. Technology can make observations fairer, more reliable, easier to implement, and more preferable to teachers and principals. Technology will also help principals improve classroom observations of teachers and ultimately improve teachers’ continuous professional development. Video technology allows early childhood teachers to record and keep videos for reflection at any time. It also provides opportunities for them to share recordings with their principals for professional dialogues and continuous professional development plans. A total of 10 early childhood teachers and 4 principals were involved in these efforts, which identified and analyzed the gaps in the quality of classroom observations and their correlation to developing teachers as reflective practitioners. The methodology involves active exploration with video technology recordings, conversations, interviews and authentic teacher-child interactions, which form the key thrust in improving teaching and learning practice. A qualitative analysis of photographs, videos, transcripts illustrating teachers’ reflections, and classroom observation checklists before and after the use of video technology was adopted.
Arguably, although PD support can be substantial, if teachers cannot connect with or create meaning out of the opportunities made available to them, they may remain passive or uninvolved. Therefore, teachers must see the value of applying new ideas, such as technology, and approaches to practice while creating personal meaning out of professional development. These video recordings are transferable and can be shared and edited through social media, email and common storage between teachers and principals. To conclude, on the importance of reflective practice among early childhood teachers and the concerns raised before and after the use of video technology, teachers and principals shared their views on the feasibility, practicality and relevance of using video technology.
Keywords: early childhood education, reflective, improve teaching and learning, technology
Procedia PDF Downloads 502
1909 Hand Movements and the Effect of Using Smart Teaching Aids: Quality of Writing Styles Outcomes of Pupils with Dysgraphia
Authors: Sadeq Al Yaari, Muhammad Alkhunayn, Sajedah Al Yaari, Adham Al Yaari, Ayman Al Yaari, Montaha Al Yaari, Ayah Al Yaari, Fatehi Eissa
Abstract:
Dysgraphia is a neurological disorder of written expression that impairs writing ability and fine motor skills, resulting primarily in problems relating not only to handwriting but also to writing coherence and cohesion. We investigate the properties of smart writing technology to highlight some unique features of its effects on the academic performance of pupils with dysgraphia. In Amis, pupils with dysgraphia experience problems expressing their ideas in writing with ordinary writing aids, the default strategy. The Amis data suggest a possible connection between the available writing aids and pupils’ writing improvement and, therefore, texts’ expression and comprehension. A group of thirteen pupils with dysgraphia was placed in a regular primary school classroom, with twenty-one pupils recruited as a control group. To ensure the validity, reliability and accountability of the research, both groups studied writing courses for two semesters, of which the first was equipped with smart writing aids while the second took place in an ordinary classroom. Two pre-tests were undertaken at the beginning of the first two semesters, and two post-tests were administered at the end of both semesters. The tests examined pupils’ ability to write coherent, cohesive and expressive texts. The dysgraphic group, which received the treatment of a writing course in classes with smart technology in the first semester, produced significantly greater increases in writing expression than in an ordinary classroom, and their performance was better than that of the control group in the second semester. The current study concludes that using smart teaching aids is a ‘MUST’ for both teaching and learning for pupils with dysgraphia. Furthermore, it is demonstrated that for young pupils with dysgraphia, expressive tasks are more challenging than coherence and cohesion tasks.
The study therefore supports the literature suggesting a role for smart educational aids in writing, and that smart writing techniques may be an efficient addition to regular educational practices, notably in special educational institutions and speech-language therapy facilities. However, further research is needed on prompting adults with dysgraphia, more often than is done for older adults without dysgraphia, to complete other productive and/or written-skills tasks.
Keywords: smart technology, writing aids, pupils with dysgraphia, hands’ movement
Procedia PDF Downloads 38
1908 The Non-Motor Symptoms of Filipino Patients with Parkinson’s Disease
Authors: Cherrie Mae S. Sia, Noel J. Belonguel, Jarungchai Anton S. Vatanagul
Abstract:
Background: Parkinson’s disease (PD) is a chronic, progressive, neurodegenerative disorder known for its motor symptoms such as bradykinesia, resting tremor, muscle rigidity, and postural instability. Patients with PD also experience non-motor symptoms (NMS) such as depression, fatigue, and sleep disturbances that often go unrecognized by clinicians. This may be due to the lack of spontaneous reports from the patients, or partly to the lack of systematic questioning by the healthcare professional. There are limited data regarding these NMS, especially for Filipino patients with PD. Objectives: This study aims to determine the non-motor symptoms of Filipino patients with Parkinson’s disease. Materials and Methods: This is a prospective cohort study involving thirty-four patients of Filipino descent diagnosed with PD in three out-patient clinics in Cebu City from April to September 2014. Each patient was interviewed using the Non-Motor Symptom Scale (NMSS). A Cebuano version of the NMSS was also provided for the non-English-speaking patients. Interview time was approximately ten to fifteen minutes for each respondent. Results: Of the thirty-four patients with Parkinson’s disease, the majority were male (N=19), and the disease was more prevalent in patients with a mean age of 62 (SD±9) years. Hypertension (59%) and diabetes mellitus (29%) were the common co-morbidities in the study population. All patients presented more than one NMS, with insomnia (41.2%), poor memory (23.5%) and depression (14.7%) being the first non-motor symptoms to occur. Symptoms involving mood/cognition (mean=2.21) and attention/memory (mean=2.05) were the most frequent and of moderate severity. Based on the NMSS, the symptoms noted to be mild and frequent were those involving the mood/cognition (score=3.84), attention/memory (score=3.50), and sleep/fatigue (score=3.00) domains.
Levodopa-Carbidopa, Ropinirole, and Pramipexole were the most frequently used medications in the study population. Conclusion: Non-motor symptoms (NMS) are common in patients with Parkinson’s disease (PD). They appear at the time of diagnosis of PD or even before the motor symptoms manifest. The earliest non-motor symptoms to occur are insomnia, poor memory, and depression. Those pertaining to mood/cognition and attention/memory are the most frequent NMS, and they are of moderate severity. Identifying these NMS by doing a questionnaire-guided interview such as the Non-Motor Symptom Scale (NMSS), before they can become more severe and affect the patient’s quality of life, is a must for every clinician caring for a PD patient. Early treatment and control of these NMS can then be given, hence improving the patient’s outcome and prognosis.
Keywords: non motor symptoms, Parkinson's Disease, insomnia, depression
Procedia PDF Downloads 448
1907 Fahr Disease vs Fahr Syndrome in the Field of a Case Report
Authors: Angelis P. Barlampas
Abstract:
Objective: The confusion of terms is a common practice in many situations of everyday life. But in some circumstances, such as in medicine, the precise meaning of a word carries a critical role for the health of the patient. Fahr disease and Fahr syndrome are often falsely used interchangeably, but they are two different conditions with different natural histories, different etiologies, and different medical management. A case of the seldom-seen Fahr disease is presented, and a comparison with the more common Fahr syndrome follows. Materials and method: A 72-year-old patient came to the emergency department complaining of non-specific mental disturbances, such as anxiety, difficulty concentrating, and tremor. The problems had a long course, but he had the impression of getting worse lately, so he decided to have them checked. Past history and laboratory tests were unremarkable. A computed tomography examination was then ordered. Results: The CT exam showed bilateral, hyperattenuating areas of heavy, dense calcium-type deposits in the basal ganglia, striatum, pallidum, thalami, the dentate nucleus, and the cerebral white matter of the frontal, parietal and occipital lobes, as well as small areas of the pons. Taking into account the absence of any known preexisting illness and the fact that the emergency laboratory tests were without findings, the rare Fahr disease was hypothesized. The suspicion was confirmed with further, more specific tests, which showed the absence of any other condition that could share the same radiological image.
Differentiating between Fahr disease and Fahr syndrome.
Fahr disease: Primarily autosomal dominant. Symmetrical and bilateral intracranial calcifications. The patient is healthy until middle age. Absence of biochemical abnormalities. Family history consistent with autosomal dominant inheritance.
Fahr syndrome: Onset earlier, between 30 and 40 years old.
Symmetrical and bilateral intracranial calcifications. Endocrinopathies: idiopathic hypoparathyroidism, secondary hypoparathyroidism, hyperparathyroidism, pseudohypoparathyroidism, pseudopseudohypoparathyroidism, etc. The disease appears at any age. There are abnormal laboratory or imaging findings. Conclusion: Fahr disease and Fahr syndrome are not the same illness, although this is not well known to inexperienced doctors. As clinical radiologists, we have to inform our colleagues when a radiological image, along with the patient's history, probably implies a rare condition and not something more usual, and prompt the investigation down the right route. In our case, a genetic test could have been done earlier to reveal the problem, thus avoiding unnecessary specific tests that cost time and are uncomfortable for the patient.
Keywords: fahr disease, fahr syndrome, CT, brain calcifications
Procedia PDF Downloads 62
1906 Association between Appearance Schemas and Personality
Authors: Berta Rodrigues Maia, Mariana Marques, Frederica Carvalho
Abstract:
Introduction: Personality traits are related to many forms of psychological distress, such as body dissatisfaction. Aim: To explore the associations between appearance schemas and personality traits. Method: 494 Portuguese university students (80.2% female, 99.2% single), with a mean age of 20.17 years (SD = 1.77; range: 18-20), filled in the Appearance Schemas Inventory-Revised, the NEO Personality Inventory (a Portuguese short version), and the Composite Multidimensional Perfectionism Scale. Results: An independent-samples t-test was conducted to compare the scores in appearance schemas by sex, with a significant difference found in self-evaluation salience scores [females (M = 37.99, SD = 7.82); males (M = 35.36, SD = 6.60); t(489) = -3.052, p = .002]. There was no significant difference in motivational salience scores by sex [females (M = 27.67, SD = 4.84); males (M = 26.70, SD = 4.99); t(489) = -1.748, p = .081]. Having conducted correlations separately by sex, self-evaluation salience was positively correlated with concern over mistakes (r = .27), doubts about actions (r = .35), and socially prescribed perfectionism (r = .23). Moreover, for females, self-evaluation salience was positively correlated with concern over mistakes (r = .34), personal standards (r = .25), doubts about actions (r = .33), parental expectations (r = .24), parental criticism (r = .24), organization (r = .11), socially prescribed perfectionism (r = .31), self-oriented perfectionism (r = .32), and neuroticism (r = .33). Concerning motivational salience, in the total sample (not separately by sex), this scale/dimension significantly correlated with conscientiousness (r = .18), personal standards (r = .23), socially prescribed perfectionism (r = .10), and self-oriented perfectionism (r = .29). All correlations were significant at a level of significance of 0.01 (2-tailed), except for socially prescribed perfectionism.
All the other correlations (with neuroticism, extroversion, openness, agreeableness, concern over mistakes, doubts about actions, parental expectations, and parental criticism) were not significant. Conclusions: Females seem to value their appearance more than males, and, in females, the salience of appearance in life seems to be associated with both maladaptive and adaptive perfectionism. In males, the salience of appearance was only related to adaptive perfectionism. These results seem to show that males are more concerned with their own standards regarding appearance, while for females, others' standards are also relevant. In females, the salience of appearance in life seems to relate to the experience of feelings such as anxiety and depression (neuroticism). The motivation to improve appearance seemed to be particularly related, in both sexes, to adaptive perfectionism (in a general way, concerning more the personal standards). Longitudinal studies are needed to clarify the causality of the results. Acknowledgment: This study was carried out under the strategic project of the Centre for Philosophical and Humanistic Studies (CEFH) UID/FIL/00683/2019, funded by the Fundação para a Ciência e a Tecnologia (FCT).
Keywords: appearance schemas, personality traits, university students, sex
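The correlation analyses reported above can be illustrated with a minimal Pearson product-moment computation; the toy data below are illustrative only, not the study's scores.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two
    equal-length score lists (toy stand-in for the study's analyses)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Toy scores: self-evaluation salience vs. a perfectionism subscale.
r = pearson_r([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])
print(round(r, 2))  # 0.8
```

In practice a statistics package would also return the p-value used for the 0.01 significance threshold cited in the abstract.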
Procedia PDF Downloads 129
1905 Prediction of Sepsis Illness from Patients Vital Signs Using Long Short-Term Memory Network and Dynamic Analysis
Authors: Marcio Freire Cruz, Naoaki Ono, Shigehiko Kanaya, Carlos Arthur Mattos Teixeira Cavalcante
Abstract:
The systems that record patient care information, known as Electronic Medical Records (EMR), and those that monitor vital signs of patients, such as heart rate, body temperature, and blood pressure, have been extremely valuable for the effectiveness of the patient’s treatment. Several kinds of research have used data from EMRs and patients' vital signs to predict illnesses. Among them, we highlight those that intend to predict, classify, or at least identify patterns of sepsis illness in patients under vital-sign monitoring. Sepsis is an organic dysfunction caused by a dysregulated patient response to an infection, and it affects millions of people worldwide. Early detection of sepsis is expected to provide a significant improvement in its treatment. Preceding works usually combined medical, statistical, mathematical and computational models to develop detection methods for early prediction, obtaining higher accuracies while using the smallest number of variables. Among other techniques, we found research using survival analysis, specialist systems, machine learning and deep learning that reached great results. In our research, patients are modeled as points moving each hour in an n-dimensional space, where n is the number of vital signs (variables). These points can reach a sepsis target point after some time. For now, the sepsis target point is calculated using the median of all patients’ variables at the sepsis onset. From these points, we calculate for each hour the position vector, the first derivative (velocity vector) and the second derivative (acceleration vector) of the variables to evaluate their behavior, and we construct a prediction model based on a Long Short-Term Memory (LSTM) network, including these derivatives as explanatory variables. The accuracy of the prediction 6 hours before the time of sepsis, considering only the vital signs, reached 83.24%; by including the position, velocity, and acceleration vectors, we obtained 94.96%.
The data are being collected from the Medical Information Mart for Intensive Care (MIMIC) database, a public database that contains vital signs, laboratory test results, observations, notes, and so on, from more than 60,000 patients.
Keywords: dynamic analysis, long short-term memory, prediction, sepsis
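The dynamic-analysis step described above, computing position, first-derivative (velocity) and second-derivative (acceleration) vectors from hourly vital signs, can be sketched as follows; the function name, padding choice and toy values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dynamic_features(vitals):
    """Build the explanatory-variable matrix from hourly vital signs:
    position, velocity (1st difference) and acceleration (2nd
    difference), each padded with a leading zero-change row so all
    three pieces cover the same number of hours."""
    position = np.asarray(vitals, dtype=float)
    velocity = np.diff(position, n=1, axis=0, prepend=position[:1])
    acceleration = np.diff(velocity, n=1, axis=0, prepend=velocity[:1])
    # Concatenate along the feature axis; this matrix would feed the LSTM.
    return np.concatenate([position, velocity, acceleration], axis=1)

# Toy example: 4 hourly readings of 2 vital signs (heart rate, temperature).
X = dynamic_features([[80, 36.5], [84, 36.7], [90, 37.0], [99, 37.6]])
print(X.shape)  # (4, 6): 2 positions + 2 velocities + 2 accelerations
```

Each hourly row of `X` then becomes one timestep of the LSTM input sequence.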
Procedia PDF Downloads 125
1904 Development of a Multi-User Country Specific Food Composition Table for Malawi
Authors: Averalda van Graan, Joelaine Chetty, Malory Links, Agness Mwangwela, Sitilitha Masangwi, Dalitso Chimwala, Shiban Ghosh, Elizabeth Marino-Costello
Abstract:
Food composition data are becoming increasingly important, as dealing with food insecurity and malnutrition in its persistent form of under-nutrition is now coupled with increasing over-nutrition and its related ailments in the developing world, of which Malawi is not spared. In the absence of a food composition database (FCDB) inherent to our dietary patterns, efforts were made to develop a country-specific FCDB for nutrition practice, research, and programming. The main objective was to develop a multi-user, country-specific food composition database and table from existing published and unpublished scientific literature. A multi-phased approach guided by the project framework was employed. Phase 1 comprised a scoping mission to assess the nutrition landscape for compilation activities. Phase 2 involved training of a compiler and data collection from various sources, primarily institutional libraries, online databases, and food industry nutrient data. Phase 3 involved the evaluation and compilation of data using FAO and INFOODS standards and guidelines. Phase 4 concluded the process with quality assurance. A total of 316 Malawian food items, categorized into eight food groups, were captured for 42 components. The majority were from the baby food group (27%), followed by the staple (22%) and animal (22%) food groups. Fats and oils comprised the fewest food items (2%), followed by fruits (6%). Proximate values are well represented; however, the percentage of missing data is huge for some components, including Se (68%), I (75%), vitamin A (42%), and the lipid profile: saturated fat (53%), monounsaturated fat (59%), polyunsaturated fat (59%) and cholesterol (56%). A multi-phased approach following the project framework led to the development of the first Malawian FCDB and table. The table reflects inherent Malawian dietary patterns and nutritional concerns. The FCDB can be used by various professionals in nutrition and health.
Rising over-nutrition, NCDs, and changing diets challenge us to provide nutrient profiles of processed foods and complete lipid profiles.
Keywords: analytical data, dietary pattern, food composition data, multi-phased approach
Procedia PDF Downloads 93
1903 Microalgae Hydrothermal Liquefaction Process Optimization and Comprehension to Produce High Quality Biofuel
Authors: Lucie Matricon, Anne Roubaud, Geert Haarlemmer, Christophe Geantet
Abstract:
Introduction: This case discusses the management of two floor-of-mouth (FOM) squamous cell carcinomas (SCC) not identified upon initial biopsy. Case Report: A 51-year-old male presented with right FOM erythroleukoplakia. Relevant medical history included alcohol dependence syndrome and alcoholic liver disease. Relevant drug therapy encompassed acamprosate, folic acid, hydroxocobalamin and thiamine. The patient had a 55.5 pack-year smoking history and alcohol dependence from age 14, drinking 16 units/day. FOM incisional biopsy and histopathological analysis diagnosed carcinoma in situ. Treatment involved wide local excision. Specimen analysis revealed two separate foci of pT1 moderately differentiated SCCs. Carcinoma staging scans revealed no pathological lymphadenopathy and no local invasion or metastasis. The SCCs had been excised in completion with narrow margins. MDT discussion concluded that, in view of the field changes, it would be difficult to identify specific areas needing further excision, although techniques such as Lugol’s iodine were considered. Further surgical resection, surgical neck management and sentinel lymph node biopsy were offered. The patient declined intervention; primary management involved close monitoring alongside alcohol and smoking cessation referral. Discussion: Narrow excision margins can increase the risk of carcinoma recurrence. Biopsy failed to identify the SCCs despite sampling an area of clinical concern. For gross field change, multiple incisional biopsies should be considered to increase the chance of accurate diagnosis and appropriate treatment. The coupling of tobacco and alcohol has a synergistic effect, exponentially increasing the relative risk of oral carcinoma development. Tobacco and alcohol control is fundamental in reducing treatment-related side effects, recurrence risk, and second primary cancer development.
Keywords: microalgae, biofuels, hydrothermal liquefaction, biomass
Procedia PDF Downloads 133
1902 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data
Authors: Huinan Zhang, Wenjie Jiang
Abstract:
Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and research into, and utilization of, the inner thermal core structure characteristics of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. First, the thermal core information in the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images provide a coarse-grained-feature wind speed estimation in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and are fused with the coarse-grained features from the first stage to obtain the final two-stage network wind speed estimation. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021.
Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with this data.
Keywords: artificial intelligence, deep learning, data mining, remote sensing
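The maximal intensity projection (MIP) step described above can be sketched as a simple maximum taken along the pressure axis of a brightness-temperature cube; the array shapes and values below are toy assumptions, not ATMS retrieval output.

```python
import numpy as np

def maximal_intensity_projection(bt_cube, axis=0):
    """Collapse a (pressure levels x lat x lon) brightness-temperature
    cube into a 2-D coarse-grained image by taking the per-pixel
    maximum along the pressure axis."""
    return np.max(np.asarray(bt_cube, dtype=float), axis=axis)

# Toy 3-level, 2x2 cube of brightness temperatures (kelvin).
cube = [
    [[250.0, 255.0], [260.0, 252.0]],
    [[270.0, 240.0], [245.0, 268.0]],
    [[265.0, 258.0], [250.0, 262.0]],
]
mip = maximal_intensity_projection(cube)
print(mip)  # per-pixel maximum over the 3 pressure levels
```

The resulting 2-D image is what the first-stage network would consume for the coarse-grained wind speed estimate.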
Procedia PDF Downloads 63
1901 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite predicted specific vulnerabilities such as OS-command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated, intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble-modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
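A rough illustration of the AST-based path-context idea that Code2Vec consumes is sketched below, here over Python source via the standard `ast` module, whereas the study targets Java and C++; the leaf pairing and the path encoding (shared-path length rather than full paths through the lowest common ancestor) are simplified assumptions.

```python
import ast
from itertools import combinations

def leaf_paths(tree):
    """Collect (token, root-to-leaf node-type path) pairs, treating
    Name and Constant nodes as leaves. Simplified relative to real
    Code2Vec extraction."""
    paths = []
    def walk(node, trail):
        trail = trail + [type(node).__name__]
        if isinstance(node, (ast.Name, ast.Constant)):
            token = node.id if isinstance(node, ast.Name) else repr(node.value)
            paths.append((token, tuple(trail)))
        for child in ast.iter_child_nodes(node):
            walk(child, trail)
    walk(tree, [])
    return paths

def path_contexts(source):
    """Pair leaves into (token_a, shared-path-length, token_b) triples,
    a crude stand-in for Code2Vec path-contexts."""
    leaves = leaf_paths(ast.parse(source))
    contexts = []
    for (ta, pa), (tb, pb) in combinations(leaves, 2):
        shared = sum(1 for x, y in zip(pa, pb) if x == y)
        contexts.append((ta, shared, tb))
    return contexts

ctx = path_contexts("def f(x):\n    return x + 1")
print(ctx)  # [('x', 4, '1')]
```

In a real pipeline these contexts would be embedded and aggregated by the Code2Vec attention mechanism before the vulnerability classifier.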
Procedia PDF Downloads 107
1900 Knowledge, Attitude and Practice on Swimming Pool Hygiene and Assessment of Microbial Contamination in Educational Institution in Selangor
Authors: Zarini Ismail, Mas Ayu Arina Mohd Anuwar, Ling Chai Ying, Tengku Zetty Maztura Tengku Jamaluddin, Nurul Azmawati Mohamed, Nadeeya Ayn Umaisara Mohamad Nor
Abstract:
The transmission of infectious diseases can occur anywhere, including in swimming pools. A high turnover of swimmers and poor hygienic behaviours increase the occurrence of direct and indirect water contamination. A wide variety of infections, such as gastrointestinal illnesses, skin rashes, eye infections, ear infections and respiratory illnesses, have been reported following exposure to contaminated water. Understanding the importance of pool hygiene, together with healthy practice, will reduce the risk of infection. The aims of the study are to investigate the knowledge, attitude and practices on pool hygiene among swimming pool users and to determine the microbial contaminants in swimming pools. A cross-sectional study was conducted using self-administered questionnaires given to 600 swimming pool users from four swimming pools belonging to three educational institutions in Selangor. Data were analyzed using SPSS Statistics version 22.0 for Windows. The knowledge, attitude and practice of the study participants were analyzed using the sum score based on Bloom’s cut-off point (80%). Having a score above the cut-off point was classified as having a high level of knowledge, a positive attitude and good practice. The association between socio-demographic characteristics, knowledge and attitude with practice on pool hygiene was determined by the Chi-Square test. The physicochemical parameters and the microbial contamination were determined using standard methods for the examination of water and wastewater. Of the 600 respondents, 465 (77.5%) were female, with a mean age of 21 years. Most of the respondents were students (98.8%) from the three educational institutions in Selangor. Overall, the majority of the respondents (89.2%) had low knowledge of pool hygiene but had positive attitudes (91.3%), whereas only half of the respondents (50%) practiced good hygiene while using the swimming pools.
There was a significant association between the level of practice on pool hygiene and both knowledge (p < 0.001) and attitude (p < 0.001). Measurements of the physicochemical parameters showed that all four swimming pools had low pH levels and two had low levels of free chlorine. However, all the water samples tested were negative for Escherichia coli. The findings of this study suggest that high knowledge and a positive attitude towards pool hygiene ensure good practice among swimming pool users. It is therefore recommended that educational interventions be given to swimming pool users to increase their knowledge of pool hygiene and thereby prevent unnecessary outbreaks of infectious diseases related to swimming pools.
Keywords: attitude, knowledge, pool hygiene, practice
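The Chi-square test of association used in the analysis above can be sketched in a few lines of Python; the 2x2 contingency counts below are hypothetical, not the study's data:

```python
# Pearson chi-square test of association between knowledge level and
# hygiene practice. The 2x2 counts are made up for illustration.
observed = [[120, 40],    # high knowledge: good / poor practice
            [180, 260]]   # low knowledge:  good / poor practice

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected count under independence: (row total * column total) / n
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        exp = row_totals[i] * col_totals[j] / n
        chi2 += (obs - exp) ** 2 / exp

# Critical value for df = 1 at alpha = 0.001 is 10.83
print(f"chi2 = {chi2:.2f} (df = 1); significant at p < 0.001: {chi2 > 10.83}")
```

With these illustrative counts the statistic far exceeds the 0.001 critical value, mirroring the kind of knowledge-practice association the study reports.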
Procedia PDF Downloads 298
1899 A Pilot Randomized Controlled Trial of a Physical Activity Intervention in a Low Socioeconomic Population: Focus on Mental Contrasting with Implementation Intentions
Authors: Shaun G. Abbott, Rebecca C. Reynolds, John B. F. de Wit
Abstract:
Low physical activity (PA) levels are a major public health concern in Australia. There is some evidence that PA interventions can increase PA levels via various methods, including online delivery. People of low socioeconomic status (SES) participate in less PA than the rest of the population, partly due to the poor self-regulation behaviors associated with socioeconomic characteristics. Interventions that involve a particular method of self-regulation, Mental Contrasting with Implementation Intentions (MCII), have regularly achieved healthy behavior change, but few studies focus on PA behavior outcomes and none has examined the effect of MCII on the PA behaviors of low-SES people. In this study, a pilot randomized controlled trial (RCT) will deliver MCII for PA behavior change to individuals of relative disadvantage for the first time. The pilot will estimate the sample size for a future full RCT and test the hypothesis that sedentary participants from areas of relative socioeconomic disadvantage in Sydney who learn the MCII technique will be more physically active, and will have improved anthropometric and psychological indicators, at the completion of a 12-week intervention compared to baseline and control. Eligible participants of relative socioeconomic disadvantage will be randomly assigned to either a 'PA Information Plus MCII Intervention Group' or a 'PA Information-Only Control Group'. Both groups will attend a baseline and a 12-week face-to-face consultation, where PA, anthropometric and psychological data will be gathered. The intervention group will be guided through an MCII session at the baseline appointment to establish a PA goal to work towards over the 12 weeks. Other than these baseline and 12-week consultations, all participant interaction will occur online. All participants will receive a 'Fitbit' accelerometer to objectively record PA as a daily step count, along with a PA diary for the duration of the study.
PA data will be recorded on a personalized online spreadsheet. Both groups will receive a standard PA information email at weeks 2, 4, and 8. The intervention group will also receive scripted follow-up online appointments to discuss goal progress. The pilot study is currently in the recruitment stage, with findings to be presented at the conference in December if selected.
Keywords: implementation intentions, mental contrasting, motivation, pedometer, physical activity, socioeconomic
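The 1:1 random allocation to the two trial arms can be sketched as below; the participant IDs and seed are illustrative, not from the trial protocol:

```python
import random

# Hypothetical enrolled participant IDs; the trial's own IDs would differ.
participants = [f"P{i:03d}" for i in range(1, 41)]

rng = random.Random(2024)  # fixed seed so the allocation is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

# Split the shuffled list in half: 1:1 allocation to the two arms.
half = len(shuffled) // 2
intervention = shuffled[:half]   # 'PA Information Plus MCII' group
control = shuffled[half:]        # 'PA Information-Only' group

print(len(intervention), len(control))
```

A fixed seed keeps the allocation auditable; a real trial would typically use concealed allocation managed independently of the researchers enrolling participants.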
Procedia PDF Downloads 306
1898 The Role of Dialogue in Shared Leadership and Team Innovative Behavior Relationship
Authors: Ander Pomposo
Abstract:
Purpose: The aim of this study was to investigate the impact of dialogue on the relationship between shared leadership and innovative behavior, and the importance of dialogue in innovation. The study contributes to the literature by giving theorists and researchers a better understanding of how to move forward in the study of moderator variables in the relationship between shared leadership and team outcomes such as innovation. Methodology: A systematic review of the literature, a method originally adopted from the medical sciences but also used in management and leadership studies, was conducted to synthesize research in a systematic, transparent and reproducible manner. A final sample of 48 empirical studies was scientifically synthesized. Findings: Shared leadership offers a better solution to team management challenges and goes beyond the classical, hierarchical or vertical leadership models based on the individual-leader approach. One of the outcomes that emerges from shared leadership is team innovative behavior. To intensify the relationship between shared leadership and team innovative behavior, and to understand when it is most effective, the moderating effects of other variables on this relationship should be examined. This synthesis of the empirical studies revealed that dialogue is a moderator variable affecting the relationship between shared leadership and team innovative behavior when leadership is understood as a relational process. Dialogue is an activity between at least two speech partners trying to fulfill a collective goal, and is a way of living open to people and ideas through interaction. Dialogue is productive when team members engage relationally with one another; when this happens, participants are more likely to take responsibility for the tasks they are involved in and for the relationships they have with others.
In this relational engagement, participants are likely to establish high-quality connections with a high degree of generativity. This study suggests that organizations should facilitate dialogue among team members in shared leadership, which has a positive impact on innovation and offers a more adaptive framework for the leadership needed in teams working on complex tasks. These results underline the need for more research on the role that dialogue plays in contributing to important organizational outcomes such as innovation. Case studies describing both best practices and obstacles to dialogue in team innovative behavior are necessary to gain more detailed insight into the field. It will be interesting to see how all these fields of research evolve and how dialogue practices are implemented in organizations that use team-based structures to deal with uncertainty, fast-changing environments, globalization and increasingly complex work.
Keywords: dialogue, innovation, leadership, shared leadership, team innovative behavior
Procedia PDF Downloads 182
1897 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach
Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi
Abstract:
Surface elevation dynamics have always responded to disturbance regimes. Creating digital elevation models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results cost-efficiently, in comparison to traditional geomatics survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds processed with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEM surface detection using high-resolution global positioning systems (GPS). Results show significant surface elevation changes on Apple Orchard Island. Accretion occurred on most of the island, while surface elevation loss due to erosion is limited to the northern and southern parts. Concurrently, the projected differential correction and validation method aimed to identify errors in the dataset. The resultant DEMs demonstrated a small error ratio (≤ 3%) in the gathered datasets when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow easy predictions within time-cost frames, with more comprehensive coverage and greater accuracy. With a DEM technique for the eco-geomorphic context, such insights into ecosystem dynamics at a coastal intertidal system would be valuable for assessing the accuracy of the predicted eco-geomorphic risk for sustainable conservation management.
This framework for evaluating historical and current anthropogenic and environmental stressors on coastal surface elevation dynamism could be profitably applied worldwide.
Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes
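The core of the workflow described above, differencing two DEM epochs and validating against RTK-GPS checkpoints, can be sketched as follows; the tiny grids and checkpoint values are synthetic stand-ins for the island datasets:

```python
import numpy as np

# Synthetic 3x3 elevation grids (metres) standing in for two DEM epochs.
dem_t1 = np.array([[1.2, 1.3, 1.1],
                   [1.0, 1.1, 0.9],
                   [0.8, 0.9, 0.7]])
dem_t2 = np.array([[1.3, 1.4, 1.1],
                   [1.1, 1.2, 0.9],
                   [0.7, 0.8, 0.6]])

# Surface elevation change: positive = accretion, negative = erosion.
change = dem_t2 - dem_t1
accreting = np.count_nonzero(change > 0)
eroding = np.count_nonzero(change < 0)

# Error check of the later DEM against hypothetical RTK-GPS checkpoints,
# keyed by grid cell (row, col) -> surveyed elevation in metres.
gps_points = {(0, 0): 1.28, (1, 1): 1.19, (2, 2): 0.61}
errors = [abs(dem_t2[i, j] - z) / z for (i, j), z in gps_points.items()]
mean_error_ratio = float(np.mean(errors))

print(f"accreting cells: {accreting}, eroding cells: {eroding}")
print(f"mean error ratio vs GPS: {mean_error_ratio:.1%}")
```

On real data the same differencing runs over co-registered rasters with matching extents and resolutions, and the checkpoint comparison uses the 30 surveyed field locations.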
Procedia PDF Downloads 267
1896 Using Lean-Six Sigma Philosophy to Enhance Revenues and Improve Customer Satisfaction: Case Studies from Leading Telecommunications Service Providers in India
Authors: Senthil Kumar Anantharaman
Abstract:
Providing telecommunications-based network services in developing countries like India, with its population of 1.5 billion people, so that these services reach every individual, is one of the greatest challenges the country has been facing in its journey towards economic growth and development. With a growing number of telecommunications service providers in the country, a constant challenge these providers face is delivering not only quality but also a delightful customer experience while simultaneously generating enhanced revenues and profits. Thus, the role played by process improvement methodologies like Six Sigma cannot be overstated, and in telecom service provider operations specifically it has delivered substantial benefits, quite comparable to its applications and advantages in other sectors such as manufacturing, financial services, information technology-based services and healthcare services. One of the key reasons this methodology has reaped great benefits in the telecommunications sector is that it has been combined with complementary process improvement techniques, such as Theory of Constraints, Lean and Kaizen, to give the maximum benefit to service providers, thereby creating a winning combination of organized process improvement methods for operational excellence and, in turn, business excellence. This paper discusses some of the key projects and areas in the end-to-end 'Quote to Cash' process at the big three Indian telecommunications companies that have been greatly assisted by applying Six Sigma along with other process improvement techniques. While the telecommunications companies considered here operate primarily in India, under both private operators and government-based setups, the methodology can be applied equally well in other developing countries with a similar context.
This study also compares the enhanced revenues that can arise from appropriate opportunities in emerging-market scenarios when Six Sigma, as a philosophy and methodology, is applied with vigour and robustness. Finally, the paper presents a winning framework combining Six Sigma with Kaizen, Lean and Theory of Constraints that will enhance both the top line and the bottom line while providing customers a delightful experience.
Keywords: emerging markets, lean, process improvement, six sigma, telecommunications, theory of constraints
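As an illustration of the kind of metric such Six Sigma projects track, a sigma level can be derived from the defect rate of a 'Quote to Cash' step; the defect counts below are made up, and the conventional 1.5-sigma long-term shift is applied:

```python
from statistics import NormalDist

# Hypothetical process data: defective orders out of total orders processed,
# with one defect opportunity per order.
defects = 350
units = 100_000
opportunities = 1

# Defects per million opportunities (DPMO).
dpmo = defects / (units * opportunities) * 1_000_000

# Long-term sigma level: the standard-normal quantile of the yield,
# plus the conventional 1.5-sigma shift.
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

print(f"DPMO = {dpmo:.0f}, sigma level = {sigma_level:.2f}")
```

A process at the canonical "six sigma" quality of 3.4 DPMO would score 6.0 on this scale; the hypothetical figures here land near 4.2, typical of a process with clear improvement opportunities.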
Procedia PDF Downloads 164
1895 Influence of Farnesol on Growth and Development of Dysdercus koenigii
Authors: Shailendra Kumar, Kamal Kumar Gupta
Abstract:
Dysdercus koenigii is an economically important pest of cotton worldwide. The pest damages the crop by sucking sap, staining lint, reducing the oil content of the seeds and deteriorating the quality of the cotton. Plants possess a plethora of secondary metabolites that are used as a defense mechanism against herbivores. One important category of such chemicals comprises insect growth regulators and the intermediates in their biosynthesis. Farnesol is a sesquiterpenoid; it is an intermediate in the juvenile hormone biosynthetic pathway in insects and has been widely reported in a variety of plants. This chemical can disrupt normal metabolic function and therefore affects various life processes of insects. The present study tested the efficacy of farnesol against Dysdercus koenigii. 2 µl of 5% (100 µg) or 10% (200 µg) farnesol was applied topically on the dorsum of the thoracic region of newly emerged fifth-instar nymphs of Dysdercus. The treated insects were observed daily for survival, weight gain and developmental anomalies over a period of ten days. The results indicated that treatment with 200 µg farnesol decreased survival of the insects to 70% after 24 h of exposure; at the lower dose, no significant decrease in survival was observed. However, the surviving nymphs showed alterations in growth, development and metamorphosis. Weight gain in the treated nymphs deviated from the control. The treated nymphs showed increased mortality during subsequent days and an increase in nymphal duration. The number of nymphs undergoing metamorphosis decreased to 46% and 88% with the 200 µg and 100 µg doses, respectively. Severe developmental anomalies were also observed: the treated nymphs moulted into supernumerary nymphs, adultoids, adults with exuviae attached and adults with wing deformities.
Treatment with 200 µg produced 26% adultoids, 4% adults with exuviae attached and 12% adults with deformed wings, while treatment with 100 µg produced 34% adultoids, 26% adults with deformed wings and 4% adults with exuviae attached. Many of the treated nymphs did not metamorphose into adults, remained in the nymphal stage and died. Our results indicate the potential application of plant-derived secondary metabolites like farnesol in the management of Dysdercus populations.
Keywords: development, Dysdercus koenigii, farnesol, survival
Procedia PDF Downloads 355
1894 Comparison between Experimental and Numerical Studies of Fully Encased Composite Columns
Authors: Md. Soebur Rahman, Mahbuba Begum, Raquib Ahsan
Abstract:
A composite column is a structural member that uses a combination of structural steel shapes, pipes or tubes, with or without reinforcing steel bars, and reinforced concrete to provide adequate load-carrying capacity to sustain either axial compressive loads alone or a combination of axial loads and bending moments. Composite construction takes advantage of the speed of construction, light weight and strength of steel, and the higher mass, stiffness, damping properties and economy of reinforced concrete. The most common types of composite columns are concrete-filled steel tubes and partially or fully encased steel profiles. A fully encased composite (FEC) column provides compressive strength, stability, stiffness, improved fireproofing and better corrosion protection. This paper reports experimental and numerical investigations of the behaviour of concrete-encased steel composite columns subjected to short-term axial load. In this study, eleven short FEC columns with square cross sections were constructed and tested to examine their load-deflection behavior. The main test variables were concrete compressive strength, cross-sectional size and percentage of structural steel. A nonlinear 3-D finite element (FE) model has been developed to analyse the inelastic behaviour of the steel, concrete and longitudinal reinforcement, as well as the effect of concrete confinement in the FEC columns. The FE models have been validated against the current experimental study conducted in the laboratory and against published experimental results under concentric load. The FE model is able to predict the experimental behaviour of FEC columns under concentric gravity loads with good accuracy, and good agreement has been achieved between the complete experimental and numerical load-deflection behaviour in this study.
The capacities of each constituent of the FEC columns, namely the structural steel, concrete and rebars, were also determined from the numerical study. Concrete is observed to provide around 57% of the total axial capacity of the column, whereas the steel I-sections contribute the rest of the capacity as well as the ductility of the overall system. The nonlinear FE model developed in this study is also used to explore the effect of concrete strength and percentage of structural steel on the behaviour of FEC columns under concentric loads. The axial capacity of FEC columns has been found to increase significantly with increasing concrete strength.
Keywords: composite, columns, experimental, finite element, fully encased, strength
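The constituent contributions discussed above can be illustrated with a plastic-resistance sum of the kind used for encased composite sections (in the spirit of Eurocode 4's Npl,Rd, but with hypothetical section sizes and material strengths, and partial safety factors omitted):

```python
# Hypothetical FEC column section: 300 mm square, encased steel I-section.
# All areas in mm^2, strengths in MPa (N/mm^2), forces in N.
A_steel = 5_000       # structural steel I-section area
A_rebar = 800         # longitudinal rebar area
A_concrete = 300 * 300 - A_steel - A_rebar  # net concrete area

f_y = 350             # structural steel yield strength
f_sk = 420            # rebar yield strength
f_c = 30              # concrete compressive strength

# Plastic axial resistance as the sum of constituent capacities,
# with the usual 0.85 factor on concrete for encased sections.
N_steel = A_steel * f_y
N_rebar = A_rebar * f_sk
N_concrete = 0.85 * A_concrete * f_c
N_total = N_steel + N_rebar + N_concrete

concrete_share = N_concrete / N_total
print(f"N_total = {N_total / 1000:.0f} kN, concrete share = {concrete_share:.0%}")
```

With these illustrative numbers the concrete carries roughly half of the axial capacity, the same order as the 57% share the numerical study attributes to concrete; the actual split depends on the section sizes and material strengths tested.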
Procedia PDF Downloads 290