Search results for: friction identification

164 Blended Learning in a Mathematics Classroom: A Focus in Khan Academy

Authors: Sibawu Witness Siyepu

Abstract:

This study explores the effects of instructional design using blended learning in the learning of radian measures among Engineering students. Blended learning is an education programme that combines online digital media with traditional classroom methods. It requires the physical presence of both lecturer and student in a mathematics computer laboratory. Blended learning provides some element of student control over time, place, path, or pace. The focus was on the use of Khan Academy to supplement traditional classroom interactions. Khan Academy is a non-profit educational organisation created by educator Salman Khan with the goal of creating an accessible place for students to learn by watching videos in a computer-assisted environment. The researcher, who is also a lecturer in a mathematics support programme, collected data by instructing students to watch Khan Academy videos on radian measures and by supplying students with traditional classroom activities. The classroom activities comprised radian measure exercises extracted from the Internet. Students were given an opportunity to engage in class discussions, social interactions, and collaborations. These activities required students to write formative assessment tests. The purpose of the formative assessment tests was to find out about the students' understanding of radian measures, including the errors and misconceptions they displayed in their calculations. Identification of errors and misconceptions serves as a pointer to students' weaknesses and strengths in their learning of radian measures. At the end of data collection, semi-structured interviews were administered to a purposefully sampled group to explore their perceptions and feedback regarding the use of the blended learning approach in the teaching and learning of radian measures. The study employed the Algebraic Insight Framework to analyse the data collected. Algebraic insight is a subset of symbol sense that allows a student to enter expressions into a computer-assisted system correctly and efficiently. This study offered students opportunities to enter topics and subtopics on radian measures into a computer through the lens of Khan Academy. Khan Academy demonstrates the procedures followed to reach solutions of mathematical problems. The researcher performed the task of explaining mathematical concepts and facilitated the process of reinvention of rules and formulae in the learning of radian measures. Lastly, activities that reinforce students' understanding of radian measures were distributed. Results showed that this study enthused the students in their learning of radian measures. Learning through videos prompted the students to ask questions, which brought clarity and sense making to the classroom discussions. The data revealed that sense making through the reinvention of rules and formulae assisted the students in enhancing their learning of radian measures. This study recommends that the use of Khan Academy in blended learning be introduced as a socialisation programme for all first-year students. This will prepare students who are not yet computer literate to become conversant with Khan Academy as a powerful tool in the learning of mathematics. Khan Academy is a key technological tool that is pivotal for the development of students' autonomy in the learning of mathematics and that promotes collaboration with lecturers and peers.

Keywords: algebraic insight framework, blended learning, Khan Academy, radian measures

Procedia PDF Downloads 310
163 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms

Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee

Abstract:

Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites to prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by means of detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting the changes in static or dynamic behavior of isotropic structures has been developed in the last two decades. These methods, based on analytical approaches, are limited in their capabilities in dealing with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristics techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification. Among them, GAs attract our attention because they do not require a considerable amount of data in advance in dealing with complex problems and can make a global solution search possible as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of Glass Fiber-reinforced Polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. The finite element model is used to study free vibrations of laminated composite plates for fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only first mode shapes in a structure for the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
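
To make the combined inverse-identification idea above concrete, the sketch below couples a simple real-coded genetic algorithm with a toy surrogate in place of the ABAQUS finite element solver: the GA searches the parameters of a bivariate-Gaussian stiffness-degradation field that best reproduce 'measured' response data. All parameter names, bounds, and the surrogate itself are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a real-coded genetic algorithm recovering the
# parameters of a bivariate-Gaussian stiffness-degradation field. The FE solver
# (ABAQUS in the paper) is replaced by a toy surrogate response.
import numpy as np

rng = np.random.default_rng(0)
nx, ny = 20, 20
X, Y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))

def stiffness_field(p):
    """Degraded stiffness D(x, y) with p = (x0, y0, sx, sy, amplitude)."""
    x0, y0, sx, sy, a = p
    bump = np.exp(-((X - x0) ** 2 / (2 * sx ** 2) + (Y - y0) ** 2 / (2 * sy ** 2)))
    return 1.0 - a * bump                                  # 1 = undamaged

def surrogate_response(p):
    """Stand-in for the FE modal analysis: here simply the field itself."""
    return stiffness_field(p)

true_params = np.array([0.6, 0.4, 0.15, 0.10, 0.5])
measured = surrogate_response(true_params)                 # 'measured' mode data

bounds = np.array([[0, 1], [0, 1], [0.05, 0.3], [0.05, 0.3], [0, 0.9]])

def fitness(p):
    return -np.mean((surrogate_response(p) - measured) ** 2)   # maximise

def ga(pop_size=60, generations=120, mutation=0.1):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, 5))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.uniform(size=5)
            child = w * a + (1 - w) * b                    # blend crossover
            child += mutation * rng.normal(size=5) * (bounds[:, 1] - bounds[:, 0])
            children.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
        pop = np.vstack([parents, np.array(children)])
    return pop[np.argmax([fitness(ind) for ind in pop])]

print("recovered parameters:", ga())
print("true parameters:     ", true_params)
```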

Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences

Procedia PDF Downloads 274
162 Organizational Resilience in the Perspective of Supply Chain Risk Management: A Scholarly Network Analysis

Authors: William Ho, Agus Wicaksana

Abstract:

Anecdotal evidence in the last decade shows that the occurrence of disruptive events and uncertainties in the supply chain is increasing. The coupling of these events with the nature of an increasingly complex and interdependent business environment leads to devastating impacts that quickly propagate within and across organizations. For example, the recent COVID-19 pandemic increased the global supply chain disruption frequency by at least 20% in 2020 and is projected to have an accumulative cost of $13.8 trillion by 2024. This crisis has drawn attention to organizational resilience as a means to weather business uncertainty. However, the concept has been criticized for being vague and lacking a consistent definition, thus reducing its significance for practice and research. This study is intended to address that issue by providing a comprehensive review of the conceptualization, measurement, and antecedents of organizational resilience as discussed in the supply chain risk management (SCRM) literature. We performed a hybrid scholarly network analysis, combining citation-based and text-based approaches, on 252 articles published from 2000 to 2021 in top-tier journals selected on three parameters: AJG/ABS ranking, the UT Dallas and FT50 lists, and editorial board review. Specifically, we employed bibliographic coupling analysis in the research cluster formation stage and co-word analysis in the research cluster interpretation and analysis stage. Our analysis reveals three major research clusters of resilience research in the SCRM literature, namely (1) supply chain network design and optimization, (2) organizational capabilities, and (3) digital technologies. We portray the research process in the last two decades in terms of the exemplar studies, problems studied, commonly used approaches and theories, and solutions provided in each cluster. We then provide a conceptual framework on the conceptualization and antecedents of resilience based on studies in these clusters and highlight potential areas that need to be studied further. Finally, we leverage the concept of abnormal operating performance to propose a new measurement strategy for resilience. This measurement overcomes the limitation of most current measurements, which are event-dependent and focus on the resistance or recovery stage without capturing the growth stage. In conclusion, this study provides a robust literature review through a scholarly network analysis that increases the completeness and accuracy of research cluster identification and analysis in understanding the conceptualization, antecedents, and measurement of resilience. It also enables us to perform a comprehensive review of resilience research in the SCRM literature by including research articles published during the pandemic and connecting this development with the plethora of articles published in the last two decades. From the managerial perspective, this study provides practitioners with clarity on the conceptualization and critical success factors of firm resilience from the SCRM perspective.
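
As an illustration of the citation-based step mentioned above, the sketch below computes bibliographic coupling strengths (shared references between article pairs) and extracts clusters from the resulting weighted network; the article identifiers, reference lists, and the modularity-based clustering routine are illustrative assumptions, not the study's corpus or tooling.

```python
# Minimal bibliographic coupling sketch: coupling strength = number of shared
# references between two articles; clusters are read off the coupling network.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

references = {                       # article -> set of cited works (toy data)
    "A1": {"r1", "r2", "r3", "r4"},
    "A2": {"r2", "r3", "r5"},
    "A3": {"r1", "r4", "r6"},
    "A4": {"r7", "r8"},
    "A5": {"r7", "r8", "r9"},
}

G = nx.Graph()
G.add_nodes_from(references)
for a, b in combinations(references, 2):
    shared = len(references[a] & references[b])   # coupling strength
    if shared > 0:
        G.add_edge(a, b, weight=shared)

# Research clusters from the weighted coupling network (modularity-based here).
clusters = greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters, 1):
    print(f"cluster {i}: {sorted(cluster)}")
```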

Keywords: supply chain risk management, organizational resilience, scholarly network analysis, systematic literature review

Procedia PDF Downloads 74
161 Gas Chromatographic-Mass Spectroscopic Analysis of Citrus reticulata Fruit Peel, Zingiber officinale Rhizome, and Sesamum indicum Seed Ethanolic Extracts Possessing Antioxidant Activity and Lipid Profile Effects

Authors: Samar Saadeldin Abdelmotalab Omer, Ikram Mohamed Eltayeb Elsiddig, Saad Mohammed Hussein Ayoub

Abstract:

A variety of herbal medicinal plants are known to confer beneficial effects with regard to the modification of cardiovascular risk factors. The anti-hypercholesterolaemic and antioxidant activities of the crude ethanolic extracts of Citrus reticulata fruit peel, Zingiber officinale rhizome, and Sesamum indicum seed have been demonstrated. These plants are assumed to possess biologically active principles, which impart their pharmacologic activities. GC-MS analysis of the ethanolic extracts was carried out to identify the active principles and their percentages of occurrence in the analytes. Analysis of the extracts was carried out using a GC-MS QP instrument (Shimadzu 2010) equipped with an RTX-50 capillary column (Restek; length 30 m, internal diameter 0.25 mm, film thickness 0.25 µm). Helium was used as the carrier gas, the temperature was programmed at 200°C for 5 minutes at a rate of 15 ml/minute, and the extracts were injected using split injection mode. The identification of the different components was achieved from their mass spectra and retention times, compared with those in the NIST library. The results revealed the presence of 80 compounds in the Sudanese locally grown C. reticulata fruit peel extract, most of which were monoterpenoid compounds including limonene (3.03%), alpha- and gamma-terpinenes (2.61%), linalool (1.38%), and citral (1.72%), which are known to have profound antioxidant effects. The sesquiterpenoids humulene (0.26%) and caryophyllene (1.97%) were also identified, the latter known to have profound anti-anxiety and anti-depressant activity in addition to beneficial effects on lipid regulation. The analysis of the oily and water-soluble portions of the locally grown S. indicum seed extract revealed the presence of a total of 64 compounds, with a considerably high percentage of the mono-unsaturated fatty acid ester methyl oleate (66.99%), in addition to methyl stearate (9.35%) and palmitate (15.71%), in the oil portion, whereas plant sterols including gamma-sitosterol (13.5%), fucosterol (2.11%), and stigmasterol (1.95%), in addition to gamma-tocopherol (1.16%), were detected in the water-soluble portion of the extract. The latter indicate various principles known to have valuable pharmacological benefits, including antioxidant activities and beneficial effects on intestinal cholesterol absorption and the regulation of serum cholesterol levels. Z. officinale rhizome extract analysis revealed the presence of 93 compounds, the most abundant of which were alpha-zingiberene (16.5%), gingerol (9.25%), alpha-sesquiphellandrene (8.3%), zingerone (6.78%), beta-bisabolene (4.19%), alpha-farnesene (3.56%), ar-curcumene (3.29%), gamma-elemene (1.25%), and a variety of other compounds. The presence of these active principles is reflected in the activity of the extracts. Activity could be assigned to a single component or to a combination of two or more extract components. GC-MS analysis confirmed the occurrence of compounds known to possess antioxidant activity and lipid profile effects.
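
The library-matching idea behind the identification step can be sketched as follows: an acquired mass spectrum is scored against candidate entries by cosine similarity over shared m/z bins, with a retention-time window as an additional filter. The spectra, retention times, and window width below are invented for illustration and are not NIST data.

```python
# Toy GC-MS library matching: cosine similarity over m/z bins plus a
# retention-time screen; all numbers are placeholders.
import numpy as np

def cosine_score(spec_a, spec_b):
    """Spectra as {m/z: intensity}; returns similarity in [0, 1]."""
    mz = sorted(set(spec_a) | set(spec_b))
    a = np.array([spec_a.get(m, 0.0) for m in mz])
    b = np.array([spec_b.get(m, 0.0) for m in mz])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

library = {  # toy 'NIST-like' entries: name -> (retention time [min], spectrum)
    "limonene": (9.8,  {68: 100, 93: 80, 136: 25}),
    "linalool": (11.2, {71: 100, 93: 60, 121: 15}),
    "citral":   (14.5, {69: 100, 84: 40, 152: 10}),
}

unknown_rt, unknown_spec = 9.9, {68: 95, 93: 85, 136: 20, 41: 10}

matches = [
    (name, cosine_score(unknown_spec, spec))
    for name, (rt, spec) in library.items()
    if abs(rt - unknown_rt) < 0.3          # retention-time window (assumed)
]
print(max(matches, key=lambda t: t[1]))    # best library hit and its score
```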

Keywords: gas chromatography, indicum, officinale, reticulata

Procedia PDF Downloads 373
160 Visual Representation of Ancient Chinese Rites with Digitalization Technology: A Case of Confucius Worship Ceremony

Authors: Jihong Liang, Huiling Feng, Linqing Ma, Tianjiao Qi

Abstract:

Confucius is the first sage in Chinese culture. Confucianism, the body of thought represented by Confucius, has long been at the core of traditional Chinese society, serving as the dominant political ideology of the centralized feudal monarchy for more than two thousand years. The Confucius Worship Ceremony held in the Confucian Temple in Qufu (Confucius's birthplace), which is dedicated to commemorating Confucius and 170 other elites of Confucianism with a whole set of formal rites, pertains to the 'Auspicious Rites', which worship heaven and earth, humans and ghosts. It was at first a medium-scale ritual activity but was then upgraded to the supreme one at the national level in the Qing Dynasty. As a national event, it was celebrated by Emperors as well as common intellectuals in traditional China. The Ceremony is solemn and respectful, with prescribed and complicated procedures, well-prepared utensils, and matched offerings, operated in rhythm with music and dances. Each participant has his place, and everyone follows the specified rules. This magnificent ritual Ceremony, embedded with rich cultural connotations, symbolizes the social acknowledgment of the orthodox culture represented by Confucianism. The rites reflected in this Ceremony are one of the most important features of Chinese culture, serving as a key bond in the identification and continuation of Chinese culture. These rites and ritual ceremonies, as cultural memories themselves, are treasures not only of China but of the whole world. However, while the ancient Chinese rites have been one of the thorniest and most complicated topics for academics, what is more regrettable is that, due to the interruption of their practice and to historical changes, these rites and ritual ceremonies have already become a vague language in today's academic discourse and strange terms of the past for common people. Luckily, today, by virtue of modern digital technology, we may be able to reproduce these ritual ceremonies, as most of them can still be found in ancient manuscripts, through which Chinese ancestors tell of the beauty and gravity of their dignified rites and, more importantly, of their spiritual pursuits, with vivid language and lively pictures. This research, based on a review and interpretation of the ancient literature, intends to reconstruct the ancient ritual ceremonies, with the Confucius Worship Ceremony as a case and by use of digital technology. Using 3D technology, the spatial scenes in the Confucian Temple can be reconstructed in virtual reality, the memorial tablets exhibited in the temple can be represented with GIS, and the different rites in the ceremonies can be rendered with animation technology. With reference to the lyrics, melodies, and lively pictures recorded in ancient scripts, it is also possible to reproduce the live dancing scenes. Also, image rendering technology can help to show the life experience and accomplishments of Confucius. Finally, by lining up all the elements in a multimedia narrative form, a complete digitalized Confucius Worship Ceremony can be reproduced, which will provide an excellent virtual experience that goes beyond time and space by bringing its audience back to that specific historical time. This digital project, once completed, will play an important role in the inheritance and dissemination of cultural heritage.

Keywords: Confucius worship ceremony, multimedia narrative form, GIS, visual representation

Procedia PDF Downloads 262
159 Predicting Suicidal Behavior by an Accurate Monitoring of RNA Editing Biomarkers in Blood Samples

Authors: Berengere Vire, Nicolas Salvetat, Yoann Lannay, Guillaume Marcellin, Siem Van Der Laan, Franck Molina, Dinah Weissmann

Abstract:

Predicting suicidal behaviors is one of the most complex challenges of daily psychiatric practice. Today, suicide risk prediction using biological tools is not validated and is only based on subjective clinical reports of the at-risk individual. Therefore, there is a great need to identify biomarkers that would allow early identification of individuals at risk of suicide. Alterations of adenosine-to-inosine (A-to-I) RNA editing of neurotransmitter receptors and other proteins have been shown to be involved in the etiology of different psychiatric disorders and linked to suicidal behavior. RNA editing is a co- or post-transcriptional process leading to a site-specific alteration in RNA sequences. It plays an important role in the epitranscriptomic regulation of RNA metabolism. In postmortem human brain tissue (prefrontal cortex) of depressed suicide victims, Alcediag found specific alterations of RNA editing activity on the mRNA coding for the serotonin 2C receptor (5-HT2cR). Additionally, an increase in expression levels of ADARs, the RNA editing enzymes, and modifications of the RNA editing profiles of prime targets, such as phosphodiesterase 8A (PDE8A) mRNA, have also been observed. Interestingly, the PDE8A gene is located on chromosome 15q25.3, a genomic region that has recurrently been associated with early-onset major depressive disorder (MDD). In the current study, we examined whether modifications in the RNA editing profile of prime targets allow identifying disease-relevant blood biomarkers and evaluating suicide risk in patients. To address this question, we performed a clinical study to identify an RNA editing signature in the blood of depressed patients with and without a history of suicide attempts. Patient samples were drawn in PAXgene tubes and analyzed on Alcediag's proprietary RNA editing platform using next-generation sequencing technology. In addition, gene expression analysis by quantitative PCR was performed. We generated a multivariate algorithm comprising various selected biomarkers to detect patients at high risk of attempting suicide. We evaluated the diagnostic performance using the relative proportion of PDE8A mRNA editing at different sites and/or isoforms as well as the expression of PDE8A and the ADARs. The significance of these biomarkers for suicidality was evaluated using the area under the receiver-operating characteristic curve (AUC). The generated algorithm comprising the biomarkers was found to have strong diagnostic performance with high specificity and sensitivity. In conclusion, we developed tools to measure disease-specific biomarkers in blood samples of patients for identifying individuals at the greatest risk for future suicide attempts. This technology not only fosters patient management but is also suitable for predicting the risk of drug-induced psychiatric side effects such as an iatrogenic increase in suicidal ideas/behaviors.
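
A minimal sketch of the evaluation idea, assuming synthetic data: several editing-derived features are combined by a multivariate model (logistic regression here, standing in for the proprietary algorithm) and judged by the area under the ROC curve.

```python
# Synthetic illustration of combining biomarker features and scoring them by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 200
# Toy features standing in for editing proportions at PDE8A sites, an isoform
# fraction, and ADAR expression; cases are shifted slightly to create signal.
y = rng.integers(0, 2, size=n)                       # 1 = history of suicide attempt
X = rng.normal(size=(n, 4)) + 0.6 * y[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print(f"AUC = {roc_auc_score(y_te, scores):.2f}")
```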

Keywords: blood biomarker, next-generation-sequencing, RNA editing, suicide

Procedia PDF Downloads 259
158 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have also proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map textual features generated from verbal descriptions onto the corresponding facial features. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making, both by the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the CelebA training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals, from Interpol or other law enforcement agencies, are evaluated on the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of a criminal in order to calculate the similarities. Two factors are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). A high score on these performance metrics should demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering.
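
The two evaluation metrics named above can be computed with scikit-image as sketched below; the 'ground truth' and 'generated' images are random placeholders rather than outputs of the described GAN.

```python
# SSIM and PSNR between a stand-in 'ground truth' image and a noisy 'generated' one.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
ground_truth = rng.random((64, 64))                      # placeholder face image
generated = np.clip(ground_truth + 0.05 * rng.normal(size=(64, 64)), 0, 1)

ssim = structural_similarity(ground_truth, generated, data_range=1.0)
psnr = peak_signal_noise_ratio(ground_truth, generated, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.1f} dB")
```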

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 162
157 Non-Invasive Evaluation of Patients After Percutaneous Coronary Revascularization. The Role of Cardiac Imaging

Authors: Abdou Elhendy

Abstract:

Numerous studies have shown the efficacy of percutaneous coronary intervention (PCI) and coronary stenting in improving left ventricular function and relieving exertional angina. Furthermore, PCI remains the main line of therapy in acute myocardial infarction. Improvement of procedural techniques and new devices have resulted in an increased number of PCIs in those with difficult and extensive lesions, multivessel disease, as well as total occlusion. Immediate and late outcomes may be compromised by acute thrombosis or the development of fibro-intimal hyperplasia. In addition, progression of coronary artery disease proximal or distal to the stent, as well as in non-stented arteries, is not uncommon. As a result, complications can occur, such as acute myocardial infarction, worsened heart failure, or recurrence of angina. In-stent restenosis can occur without symptoms or with atypical complaints, rendering the clinical diagnosis difficult. Routine invasive angiography is not appropriate as a follow-up tool due to the associated risk and cost and the limited functional assessment. Exercise and pharmacologic stress testing are increasingly used to evaluate myocardial function, perfusion, and the adequacy of revascularization. Information obtained by these techniques provides important clues regarding the presence and severity of compromise in myocardial blood flow. Stress echocardiography can be performed in conjunction with exercise or dobutamine infusion. The diagnostic accuracy has been moderate, but the results provide excellent prognostic stratification. Adding myocardial contrast agents can improve imaging quality and allows assessment of both function and perfusion. Stress radionuclide myocardial perfusion imaging is an alternative to evaluate these patients. The extent and severity of wall motion and perfusion abnormalities observed during exercise or pharmacologic stress are predictors of survival and of the risk of cardiac events. According to current guidelines, stress echocardiography and radionuclide imaging are considered to have appropriate indications among patients after PCI who have cardiac symptoms and those who underwent incomplete revascularization. Stress testing is not recommended in asymptomatic patients, particularly early after revascularization. Coronary CT angiography is increasingly used and provides high sensitivity for the diagnosis of coronary artery stenosis. Average sensitivity and specificity for the diagnosis of in-stent stenosis in pooled data are 79% and 81%, respectively. Limitations include blooming artifacts and low feasibility in patients with small stents or thick struts. Anatomical and functional cardiac imaging modalities are cornerstones of the assessment of patients after PCI and provide salient diagnostic and prognostic information. Current imaging techniques can serve as gatekeepers for coronary angiography, thus limiting the risk of invasive procedures to those who are likely to benefit from subsequent revascularization. The determination of which modality to apply requires careful identification of the merits and limitations of each technique as well as the unique characteristics of each individual patient.

Keywords: coronary artery disease, stress testing, cardiac imaging, restenosis

Procedia PDF Downloads 168
156 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry

Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard

Abstract:

The wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, bad or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns and to device defectivity. This issue becomes more and more important with transistor size shrinkage and mainly concerns high aspect ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While low-frequency acoustic reflectometry is a well-known method for non-destructive testing applications, we have recently shown that it is also well suited to nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a confrontation of both experimental and modeling results. The proposed acoustic method is based on the evaluation of the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers have been fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The DTI structures studied, manufactured on the wafer frontside, are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the Si wafer frontside. In that case, the acoustic signal reflection occurs at the bottom and at the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed. The model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo, obtained through the reflection at the Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water/ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. In the untreated surface case, the acoustic reflection coefficient values with water show that liquid imbibition is partial. In the treated surface case, the acoustic reflection is total with water (no liquid in the DTI). Impalement of the liquid occurs for a specific surface tension, but it is still partial for pure ethanol. The DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. The sensitivity of this high-frequency acoustic method, coupled with an FDTD propagative model, thus enables local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low surface tension liquids are then detectable with this method.
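
A minimal one-dimensional FDTD sketch of the propagative idea is given below: an acoustic pulse in silicon reflects at an impedance step representing the liquid-filled region, and a monitor records the incident and reflected echoes. The grid, material values, and source are rounded illustrative choices, not the paper's 3-D model or geometry.

```python
# 1-D staggered-grid acoustic FDTD: a pulse in Si reflects at a Si/water step.
import numpy as np

nx, nt = 1200, 3500
c = np.full(nx, 8433.0)                     # longitudinal wave speed in Si [m/s]
rho = np.full(nx, 2329.0)                   # density of Si [kg/m^3]
c[600:], rho[600:] = 1480.0, 1000.0         # water-filled region beyond the interface

dx = 0.2e-6                                 # 0.2 um cells
dt = 0.4 * dx / c.max()                     # CFL-stable time step
K = rho * c**2                              # bulk modulus

p = np.zeros(nx)                            # pressure
v = np.zeros(nx + 1)                        # particle velocity (staggered grid)
src = np.exp(-0.5 * ((np.arange(nt) - 300) / 60.0) ** 2)   # short pressure pulse

record = np.zeros(nt)
for n in range(nt):
    v[1:-1] -= dt / (0.5 * (rho[:-1] + rho[1:])) * (p[1:] - p[:-1]) / dx
    p -= dt * K * (v[1:] - v[:-1]) / dx
    p[100] += src[n]                        # soft source near the transducer side
    record[n] = p[110]                      # monitor between source and interface

print("incident peak:", np.abs(record[:1200]).max(),
      "reflected peak:", np.abs(record[2400:]).max())
```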

Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor

Procedia PDF Downloads 327
155 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text

Authors: Duncan Wallace, M-Tahar Kechadi

Abstract:

In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as such data is widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC acts as an ad-hoc delivery point for triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of Deep Learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. In this section, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved upon by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for determining high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
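
One of the compared architectures (an LSTM text classifier) can be sketched in Keras as below; the vocabulary size, sequence length, topology, and the tiny random dataset are placeholders, not the OOHC corpus or the exact networks evaluated in the paper.

```python
# Minimal LSTM text classifier sketch on random integer-encoded 'clinical notes'.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len = 5000, 200
model = tf.keras.Sequential([
    layers.Embedding(vocab_size, 64),
    layers.LSTM(64),                        # could be GRU(64) for the GRU variant
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = likely frequent attender
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.randint(1, vocab_size, size=(32, seq_len))   # toy token sequences
y = np.random.randint(0, 2, size=(32,))                    # toy outlier labels
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
print(model.predict(X[:2], verbose=0))
```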

Keywords: artificial neural networks, data-mining, machine learning, medical informatics

Procedia PDF Downloads 131
154 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is a worldwide imaging modality used to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologists' workload by providing a supporting first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to those of fibroglandular tissue, which consequently makes it hard to automatically quantify mammographic breast density. Therefore, a pre-processing step is needed to segment the pectoral muscle, which may otherwise erroneously be quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique incidence digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed, on the Matlab® platform, for the pre-processing of the images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle from mammograms. Firstly, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the limit of the pectoral muscle, followed by the active contour method. The seed of the active contour is placed at the limit of the pectoral muscle found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of segmentation of the proposed method. The Bland-Altman statistics compared both methods in relation to the area (mm²) of the segmented pectoral muscle. The statistics showed data within the 95% confidence interval, supporting the agreement between the automatic segmentation and the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. The segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
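
A simplified sketch of the pre-processing chain (thresholding, Hough-transform boundary detection, Jaccard comparison) is given below on a synthetic image with a bright triangular 'pectoral muscle' corner; the active-contour refinement and the clinical images are omitted, and all values are illustrative.

```python
# Threshold a synthetic mammogram-like image, find the straight muscle limit
# with the Hough transform, and compare the result to ground truth via Jaccard.
import numpy as np
from skimage.filters import threshold_otsu, sobel
from skimage.transform import hough_line, hough_line_peaks

h, w = 256, 256
rows, cols = np.mgrid[0:h, 0:w]
truth = cols < (0.5 * (h - rows))            # triangular muscle region (ground truth)
image = 0.2 + 0.6 * truth + 0.05 * np.random.default_rng(0).normal(size=(h, w))

mask = image > threshold_otsu(image)         # step 1: thresholding
edges = sobel(mask.astype(float)) > 0.1      # edge map fed to the Hough transform

hspace, angles, dists = hough_line(edges)    # step 2: straight muscle limit
_, best_angle, best_dist = (v[0] for v in hough_line_peaks(hspace, angles, dists, num_peaks=1))

# Pixels on the muscle side of the detected line approximate the segmentation.
seg = (cols * np.cos(best_angle) + rows * np.sin(best_angle)) < best_dist

jaccard = np.logical_and(seg, truth).sum() / np.logical_or(seg, truth).sum()
print(f"Jaccard index vs. ground truth: {jaccard:.2f}")
```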

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 350
153 'iTheory': Mobile Way to Music Fundamentals

Authors: Marina Karaseva

Abstract:

The beginning of our century marked a new digital epoch in education. In the last decade, the newest stage of this process was initiated by touch-screen mobile devices and the program applications developed for them. The possibilities of touch for learning the fundamentals of music are of especial importance for music majors. The phenomenon of touching, firstly, makes it realistic to play on the screen as on a musical instrument and, secondly, helps students to learn music theory while listening to its sound elements by ear. Nowadays we can detect several levels of such mobile applications: from the basic ones devoted to elementary music training, such as interval and chord recognition, to the more advanced applications which deal with the perception of modes other than major and minor, ethnic timbres, and complicated rhythms. The main purpose of the proposed paper is to disclose the main tendencies in this process and to demonstrate the most innovative features of music theory applications based on the iOS and Android systems, as the most commonly used. Methodological recommendations on how to use this digital material musicologically will be given for professional music education at different levels. These recommendations are based on the author's more than ten years of 'iTheory' teaching experience. In this paper, we try to classify all types of 'iTheory' mobile applications logically into several groups, according to their methodological goals. The general concepts given below will be demonstrated with concrete examples. The most numerous group of programs consists of simulators for studying notes with audio-visual links. The link-pair types are as follows: sound and musical notation (which may be used as flashcards, as for studying words and letters), sound and key, and sound and string (basically, the guitar's). The second large group consists of test programs containing a game component. As a rule, they are based on exercises in identification by ear and reconstruction by voice: sounds and intervals by their sounding (harmonic and melodic), musical modes, rhythmic patterns, chords, and selected instrumental timbres. Some programs are aimed at establishing aural connections between concepts of music theory and their musical embodiments. There are also programs focused on the development of operative musical memory (with repetition of sounded phrases and their transposition to a new pitch), as well as on perfect pitch training. In addition, a number of programs for improvisation skills have been developed. An absolute-pitch system of solmisation is a common base for mobile programs. However, it is also possible to find programs focused on the relative-pitch system of solfège. In the App Store and Google Play online stores, there are also many free simulator programs for musical instruments: piano, guitar, celesta, violin, and organ. These programs may be effective for individual and group exercises in ear training or composition classes. The great variety and good sound quality of these programs now give musicians a unique opportunity to master their musical abilities in a shorter time. That is why such teaching material may be a way to the effective study of music theory.

Keywords: ear training, innovation in music education, music theory, mobile devices

Procedia PDF Downloads 205
152 Designing Disaster Resilience Research in Partnership with an Indigenous Community

Authors: Suzanne Phibbs, Christine Kenney, Robyn Richardson

Abstract:

The Sendai Framework for Disaster Risk Reduction called for the inclusion of indigenous people in the design and implementation of all hazard policies, plans, and standards. Ensuring that indigenous knowledge practices were included alongside scientific knowledge about disaster risk was also a key priority. Indigenous communities have specific knowledge about climate and natural hazard risk that has been developed over an extended period of time. However, research within indigenous communities can be fraught with issues such as power imbalances between the researcher and researched, the privileging of researcher agendas over community aspirations, as well as appropriation and/or inappropriate use of indigenous knowledge. This paper documents the process of working alongside a Māori community to develop a successful community-led research project. Research Design: This case study documents the development of a qualitative community-led participatory project. The community research project utilizes a kaupapa Māori research methodology which draws upon Māori research principles and concepts in order to generate knowledge about Māori resilience. The research addresses a significant gap in the disaster research literature relating to indigenous knowledge about collective hazard mitigation practices as well as resilience in rurally isolated indigenous communities. The research was designed in partnership with the Ngāti Raukawa Northern Marae Collective as well as Ngā Wairiki Ngāti Apa (a group of Māori sub-tribes who are located in the same region) and will be conducted by Māori researchers utilizing Māori values and cultural practices. The research project aims and objectives, for example, are based on themes that were identified as important to the Māori community research partners. The research methodology and methods were also negotiated with and approved by the community. Kaumātua (Māori elders) provided cultural and ethical guidance over the proposed research process and will continue to provide oversight over the conduct of the research. Purposive participant recruitment will be facilitated with support from local Māori community research partners, utilizing collective marae networks and snowballing methods. It is envisaged that Māori participants’ knowledge, experiences and views will be explored using face-to-face communication research methods such as workshops, focus groups and/or semi-structured interviews. Interviews or focus groups may be held in English and/or Te Reo (Māori language) to enhance knowledge capture. Analysis, knowledge dissemination, and co-authorship of publications will be negotiated with the Māori community research partners. Māori knowledge shared during the research will constitute participants’ intellectual property. New knowledge, theory, frameworks, and practices developed by the research will be co-owned by Māori, the researchers, and the host academic institution. Conclusion: An emphasis on indigenous knowledge systems within the Sendai Framework for Disaster Risk Reduction risks the appropriation and misuse of indigenous experiences of disaster risk identification, mitigation, and response. The research protocol underpinning this project provides an exemplar of collaborative partnership in the development and implementation of an indigenous project that has relevance to policymakers, academic researchers, other regions with indigenous communities and/or local disaster risk reduction knowledge practices.

Keywords: community resilience, indigenous disaster risk reduction, Maori, research methods

Procedia PDF Downloads 125
151 Evaluation of Antibiotic Resistance and Extended-Spectrum β-Lactamases Production Rates of Gram Negative Rods in a University Research and Practice Hospital, 2012-2015

Authors: Recep Kesli, Cengiz Demir, Onur Turkyilmaz, Hayriye Tokay

Abstract:

Objective: Gram-negative rods are a large group of bacteria that includes many families, genera, and species. Most clinical isolates belong to the family Enterobacteriaceae. Resistance due to the production of extended-spectrum β-lactamases (ESBLs) is a difficulty in the handling of Enterobacteriaceae infections, but other mechanisms of resistance are also emerging, leading to multidrug resistance and threatening to create panresistant species. We aimed in this study to evaluate the resistance rates of Gram-negative rods isolated from clinical specimens in the Microbiology Laboratory, Afyon Kocatepe University, ANS Research and Practice Hospital, between October 2012 and September 2015. Methods: The Gram-negative rod strains were identified by conventional methods and the VITEK 2 automated identification system (bio-Mérieux, Marcy l'etoile, France). Antibiotic resistance tests were performed by both the Kirby-Bauer disk-diffusion method and automated Antimicrobial Susceptibility Testing (AST, bio-Mérieux, Marcy l'etoile, France). Disk diffusion results were evaluated according to the standards of the Clinical and Laboratory Standards Institute (CLSI). Results: Of the 1,701 Enterobacteriaceae strains isolated in total, 1,434 (84.3%) were Klebsiella pneumoniae, 171 (10%) were Enterobacter spp., and 96 (5.6%) were Proteus spp.; of the 639 non-fermenting Gram-negative isolates, 477 (74.6%) were identified as Pseudomonas aeruginosa, 135 (21.1%) as Acinetobacter baumannii, and 27 (4.3%) as Stenotrophomonas maltophilia. The ESBL positivity rate of the whole Enterobacteriaceae group studied was 30.4%. Antibiotic resistance rates for Klebsiella pneumoniae were as follows: amikacin 30.4%, gentamicin 40.1%, ampicillin-sulbactam 64.5%, cefepime 56.7%, cefoxitin 35.3%, ceftazidime 66.8%, ciprofloxacin 65.2%, ertapenem 22.8%, imipenem 20.5%, meropenem 20.5%, and trimethoprim-sulfamethoxazole 50.1%; those for 114 Enterobacter spp. were detected as: amikacin 26.3%, gentamicin 31.5%, cefepime 26.3%, ceftazidime 61.4%, ciprofloxacin 8.7%, ertapenem 8.7%, imipenem 12.2%, meropenem 12.2%, and trimethoprim-sulfamethoxazole 19.2%. Resistance rates for Proteus spp. were: 24.3% meropenem, 26.2% imipenem, 20.2% amikacin, 10.5% cefepime, 33.3% ciprofloxacin and levofloxacin, 31.6% ceftazidime, 20% ceftriaxone, 15.2% gentamicin, 26.6% amoxicillin-clavulanate, and 26.2% trimethoprim-sulfamethoxazole. Resistance rates of P. aeruginosa were found as follows: amikacin 32%, gentamicin 42%, imipenem 43%, meropenem 43%, ciprofloxacin 50%, levofloxacin 52%, cefepime 38%, ceftazidime 63%, and piperacillin/tazobactam 85%; for Acinetobacter baumannii: amikacin 53.3%, gentamicin 56.6%, imipenem 83%, meropenem 86%, ciprofloxacin 100%, ceftazidime 100%, piperacillin/tazobactam 85%, and colistin 0%; and for S. maltophilia: levofloxacin 66.6% and trimethoprim/sulfamethoxazole 0%. Conclusions: This study showed that resistance in Gram-negative rods is a serious clinical problem in our hospital and suggested the need to perform typing of the isolated bacteria, with susceptibility testing, regularly as part of routine laboratory procedures. This practice truly guides empirical antibiotic treatment choices, given that each hospital shows a different resistance profile.

Keywords: antibiotic resistance, gram negative rods, ESBL, VITEK 2

Procedia PDF Downloads 331
150 An Epidemiological Study on Cutaneous Melanoma, Basocellular and Epidermoid Carcinomas Diagnosed in a Sunny City in Southeast Brazil in a Five-Year Period

Authors: Carolina L. Cerdeira, Julia V. F. Cortes, Maria E. V. Amarante, Gersika B. Santos

Abstract:

Skin cancer is the most common cancer in several parts of the world; in a tropical country like Brazil, the situation is no different. The Brazilian population is exposed to high levels of solar radiation, increasing the risk of developing cutaneous carcinoma. Aiming to encourage prevention measures and the early diagnosis of these tumors, a study was carried out that analyzed data on cutaneous melanomas, basal cell carcinomas, and epidermoid carcinomas, using as the primary data source the medical records of 161 patients registered in one pathology service, which performs skin biopsies, in a city of Minas Gerais, Brazil. All patients diagnosed with skin cancer at this service from January 2015 to December 2019 were included. The incidence of skin carcinoma cases was correlated with histological type, sex, age group, and topographic location. Correlation between variables was verified by Fisher's exact test at a nominal significance level of 5%, with statistical analysis performed in R® software. A significant association was observed between age group and type of cancer (p = 0.0085); age group and sex (p = 0.0298); and type of cancer and body region affected (p < 0.01). The 161 cases analyzed comprised 93 basal cell carcinomas, 66 epidermoid carcinomas, and only two cutaneous melanomas. In the group aged 19 to 30 years, the epidermoid form was the most prevalent; from 31 to 45 and from 46 to 59 years, the basal cell form prevailed; in patients aged 60 or over, both types occurred with higher frequencies. Associating age group and sex, in the groups aged 18 to 30 and 46 to 59 years, women were the most affected. In the 31- to 45-year-old group, men predominated. There was gender balance in the group aged 60 or over. As for topography, there was a high prevalence in the head and neck, followed by the upper limbs. Relating histological type and topography, there was a prevalence of basal cell and epidermoid carcinomas in the head and neck. In the chest, the basal cell form was the most prevalent; in the upper limbs, the epidermoid form prevailed. Cutaneous melanoma affected only the chest and upper limbs. About 82% of patients aged 60 or over had head and neck cancer; in the groups aged 46 to 59 and 60 or over, the head and neck region and upper limbs were predominantly affected; the distribution was balanced in the 31- to 45-year-old group. In conclusion, basal cell carcinoma was predominant, whereas cutaneous melanoma was the rarest among the types analyzed. Patients aged 60 or over were the most affected, with gender balance. In young adults, there was a prevalence of the epidermoid form; in middle-aged patients, basal cell carcinoma was predominant; in the elderly, both forms presented with higher frequencies. There was a higher incidence of head and neck cancers, followed by malignancies affecting the upper limbs. The epidermoid type manifested significantly in the upper limbs. Body regions such as the thorax and lower limbs were less affected, which is explained by the lower exposure of these areas to incident solar radiation.
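
For illustration only, the association test named above can be run in Python as sketched below; SciPy's fisher_exact handles 2x2 tables, so the invented counts collapse two cancer types by two age bands and do not reproduce the study's larger contingency tables, which were analyzed in R.

```python
# Toy Fisher's exact test on an invented 2x2 contingency table.
from scipy.stats import fisher_exact

#                 basal cell   epidermoid
table = [[30, 10],            # patients under 60
         [63, 56]]            # patients aged 60 or over
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```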

Keywords: basal cell carcinoma, cutaneous melanoma, skin cancer, squamous cell carcinoma, topographic location

Procedia PDF Downloads 129
149 Wind Tunnel Tests on Ground-Mounted and Roof-Mounted Photovoltaic Array Systems

Authors: Chao-Yang Huang, Rwey-Hua Cherng, Chung-Lin Fu, Yuan-Lung Lo

Abstract:

Solar energy is one of the renewable alternatives for reducing the CO2 emissions produced by conventional power plants in modern society. As an island that is frequently visited by strong typhoons and earthquakes, Taiwan urgently needs to make an effort to revise the local regulations to strengthen the safety design of photovoltaic systems. Currently, the Taiwanese code for the wind-resistant design of structures does not give a clear explanation of photovoltaic systems, especially when the systems are arranged in an arrayed format. Furthermore, when the arrayed photovoltaic system is mounted on the rooftop, the approaching flow is significantly altered by the building and leads to different pressure patterns in different areas of the photovoltaic system. In this study, an L-shaped arrayed photovoltaic system is first mounted on the ground of the wind tunnel and then mounted on the building rooftop. The system consists of 60 PV panel models. Each panel model is equivalent to a full size of 3.0 m in depth and 10.0 m in length. Six pressure taps are installed on the upper surface of the panel model and another six on the bottom surface to measure the net pressures. The wind attack angle is varied from 0° to 360° in 10° intervals to capture the worst case with respect to wind direction. The sampling rate of the pressure scanning system is set high enough to precisely estimate the peak pressure, and at least 20 samples are recorded for good ensemble-average stability. Each sample is equivalent to a 10-minute time length in full scale. All the scale factors, including the time scale, length scale, and velocity scale, are properly verified by similarity rules in the low-wind-speed wind tunnel environment. The purpose of the L-shaped arrayed system is to understand the pressure characteristics in the corner area. Extreme value analysis is applied to obtain the design pressure coefficient for each net pressure. The commonly utilized Cook-and-Mayne coefficient, 78%, is set as the target non-exceedance probability for design pressure coefficients under the Gumbel distribution. The best linear unbiased estimator method is utilized for the Gumbel parameter identification. A careful time-moving-average method is also applied in the data processing. Results show that when the arrayed photovoltaic system is mounted on the ground, the first row of panels reveals stronger positive pressure than when mounted on the rooftop. Due to the flow separation occurring at the building edge, the first row of panels on the rooftop is mostly under negative pressure; the last row, on the other hand, shows positive pressures because of flow reattachment. Different areas also have different pressure patterns, which corresponds well to the provisions in ASCE 7-16 describing the area division for design values. Several minor observations are found from the parametric studies, such as the rooftop edge effect, parapet effect, building aspect effect, row interval effect, and so on. General comments are then made for the proposed revision of the Taiwanese code.
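
The extreme-value step can be sketched as below: a Gumbel distribution is fitted to the per-run peak pressure coefficients and the 78% (Cook-and-Mayne) non-exceedance value is taken as the design coefficient. Maximum-likelihood fitting is used here for brevity instead of the best linear unbiased estimator, and the 20 'peaks' are synthetic numbers, not wind tunnel data.

```python
# Gumbel fit of synthetic peak pressure coefficients and the 78% design value.
from scipy.stats import gumbel_r

peak_cp = gumbel_r.rvs(loc=1.8, scale=0.25, size=20, random_state=0)  # 20 runs

loc, scale = gumbel_r.fit(peak_cp)             # location/scale of the Gumbel fit
design_cp = gumbel_r.ppf(0.78, loc=loc, scale=scale)
print(f"fitted loc={loc:.2f}, scale={scale:.2f}, design Cp(78%)={design_cp:.2f}")
```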

Keywords: aerodynamic force coefficient, ground-mounted, roof-mounted, wind tunnel test, photovoltaic

Procedia PDF Downloads 138
148 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model

Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero

Abstract:

Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing lithium-ion battery parameters, such as the material particle size or the adjustment direction of the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), such as Fick's law of diffusion and the MacInnes and Ohm equations, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational times. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being of second order in time and is intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests. This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling the sharp gradients which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the non-eligibility of the simple Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and the Chebyshev discretization methods will be compared in terms of accuracy and computational times under a wide range of battery operating scenarios. These include both long-term simulations for aging tests and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electric vehicle braking and acceleration.
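
A minimal sketch of the Crank-Nicolson step for a single one-dimensional Fickian diffusion equation with zero-flux boundaries (one ingredient of the electrolyte sub-model, not the full DFN implementation) is given below; the grid size, diffusion coefficient, and initial profile are illustrative values.

```python
# Crank-Nicolson time stepping for dc/dt = D d2c/dx2 with Neumann (zero-flux) ends.
import numpy as np

N, L, D, dt, steps = 50, 1e-4, 1e-10, 0.1, 200     # nodes, length [m], D [m^2/s], dt [s]
dx = L / (N - 1)
r = D * dt / dx**2

# Second-difference matrix with zero-flux (ghost-node) boundary rows.
A = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
A[0, 1] = A[-1, -2] = 2.0

M_left = np.eye(N) - 0.5 * r * A        # (I - r/2 A) c_new = (I + r/2 A) c_old
M_right = np.eye(N) + 0.5 * r * A

c = np.linspace(1000.0, 1200.0, N)      # initial concentration profile [mol/m^3]
for _ in range(steps):
    c = np.linalg.solve(M_left, M_right @ c)

# The profile relaxes toward a uniform value as diffusion proceeds.
print(f"final profile: min={c.min():.1f}, max={c.max():.1f}, mean={c.mean():.1f} mol/m^3")
```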

Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods

Procedia PDF Downloads 24
147 Effectiveness of Simulation Resuscitation Training to Improve Self-Efficacy of Physicians and Nurses at Aga Khan University Hospital in Advanced Cardiac Life Support Courses Quasi-Experimental Study Design

Authors: Salima R. Rajwani, Tazeen Ali, Rubina Barolia, Yasmin Parpio, Nasreen Alwani, Salima B. Virani

Abstract:

Introduction: Nurses and physicians have a critical role in initiating lifesaving interventions during cardiac arrest. Timely delivery of high-quality Cardiopulmonary Resuscitation (CPR), together with advanced resuscitation skills and management of cardiac arrhythmias, is a key dimension of a code during cardiac arrest. The chances of patient survival decrease if healthcare professionals are unable to initiate CPR in a timely manner. Moreover, traditional training does not prepare physicians and nurses to a competent level, and their knowledge declines over time. In this regard, simulation training has been proven effective in promoting resuscitation skills. Simulation as a teaching and learning strategy improves knowledge and skills performance during resuscitation through experiential learning, without compromising patient safety in real clinical situations. The purpose of the study is to evaluate the effectiveness of simulation training in Advanced Cardiac Life Support courses by using a self-efficacy tool. Methods: The study used a quantitative, non-randomized quasi-experimental design. It examined the effectiveness of simulation through self-efficacy in two instructional methods: Medium Fidelity Simulation (MFS) and the Traditional Training Method (TTM). The sample size was 220. Data were compiled using SPSS. The standardized simulation-based training increases self-efficacy, knowledge, and skills and improves the management of patients in actual resuscitation. Results: 153 students participated in the study (CG: n = 77 and EG: n = 77). The comparison was done between arms in the pre- and post-test (F = 1.69, p < 0.195, df = 1). There was no significant difference between arms in the pre- and post-test. The interaction between arms was also examined, and there was no significant difference in the interaction between arms in the pre- and post-test (F = 0.298, p < 0.586, df = 1). However, the results showed that self-efficacy scores in advanced cardiac life support resuscitation courses were significantly higher within the experimental group in the post-test as compared to the Traditional Training Method (TTM), with an overall p < 0.0001 and F = 143.316 (post-test mean = 45.01, SD = 9.29, versus pre-test mean = 31.15, SD = 12.76), as compared to the TTM group (post-test mean = 29.68, SD = 14.12, versus pre-test mean = 42.33, SD = 11.39). Conclusion: The standardized simulation-based training was conducted in a safe learning environment in Advanced Cardiac Life Support courses, and physicians and nurses benefited in terms of self-confidence, early identification of life-threatening scenarios, early initiation of CPR, delivery of high-quality CPR, timely administration of medication and defibrillation, appropriate airway management, rhythm analysis and interpretation, achievement of Return of Spontaneous Circulation (ROSC), team dynamics, debriefing, and teaching and learning strategies that will improve patient survival in actual resuscitation.

Keywords: advanced cardiac life support, cardio pulmonary resuscitation, return of spontaneous circulation, simulation

Procedia PDF Downloads 80
146 Assessment of Potential Chemical Exposure to Betamethasone Valerate and Clobetasol Propionate in Pharmaceutical Manufacturing Laboratories

Authors: Nadeen Felemban, Hamsa Banjer, Rabaah Jaafari

Abstract:

One of the most common hazards in the pharmaceutical industry is the chemical hazard, which can cause harm or lead to occupational health diseases/illnesses due to chronic exposure to hazardous substances. Therefore, a chemical agent management system is required, including hazard identification, risk assessment, controls for specific hazards, and inspections, to keep the workplace healthy and safe. Routine management monitoring is also required to verify the effectiveness of the control measures. Betamethasone Valerate and Clobetasol Propionate are APIs (Active Pharmaceutical Ingredients) with a highly hazardous classification, Occupational Hazard Category 4 (OHC 4), which requires full containment (ECA-D) during handling to avoid chemical exposure. According to the Safety Data Sheets, these chemicals are reproductive toxicants (reprotoxicant H360D), which may affect female workers’ health, cause damage to an unborn child, or impair fertility. In this study, a qualitative chemical risk assessment (qCRA) was conducted to assess chemical exposure during handling of Betamethasone Valerate and Clobetasol Propionate in pharmaceutical laboratories. The outcome of the qCRA identified a risk of potential chemical exposure (risk rating 8, amber risk). Therefore, immediate actions were taken to ensure that interim controls (according to the hierarchy of controls) were in place and in use to minimize the risk of chemical exposure. No open handling should be performed outside the Steroid Glove Box Isolator (SGB), and the required Personal Protective Equipment (PPE) must be worn. The PPE includes a coverall, nitrile gloves, safety shoes, and a powered air-purifying respirator (PAPR). Furthermore, a quantitative assessment (personal air sampling) was conducted to verify the effectiveness of the engineering controls (SGB isolator) and to confirm whether there was chemical exposure, as indicated earlier by the qCRA. Three personal air samples were collected using an air sampling pump and filters (IOM2 filters, 25 mm glass fiber media). The collected samples were analyzed by HPLC in the BV lab, and the measured concentrations were reported in ug/m3 with reference to the 8-hr Occupational Exposure Limits (8-hr TWA) for each analyte. The analytical results were expressed as 8-hr TWAs (8-hr time-weighted averages) and analyzed using Bayesian statistics (IHDataAnalyst). The results of the Bayesian likelihood graph indicate category 0, which means exposures are de minimis, trivial, or non-existent, and employees have little to no exposure. These results also indicate that the three samples are representative, with very low variation (SD = 0.0014). In conclusion, the engineering controls were effective in protecting the operators from such exposure. However, routine chemical monitoring is required every 3 years unless there is a change in the process or type of chemicals. Also, frequent management monitoring (daily, weekly, and monthly) is required to ensure the control measures are in place and in use. Furthermore, a Similar Exposure Group (SEG) was identified in this activity and included in the annual health surveillance for health monitoring.
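For readers unfamiliar with how personal air-sampling results are compared against an 8-hr OEL, the short Python sketch below shows the standard 8-hour time-weighted average calculation. The concentrations, sampling durations, and OEL value are placeholder assumptions, not the study's measurements.

```python
# Minimal sketch of the 8-hr time-weighted average (TWA) used to compare
# personal air-sampling results against an 8-hr OEL. All values are
# illustrative placeholders, not data from this study.

samples = [            # (concentration in ug/m3, sampled duration in hours)
    (0.010, 3.5),
    (0.008, 2.5),
    (0.012, 2.0),
]
oel_8hr = 0.1          # assumed 8-hr OEL for the API in ug/m3 (illustrative only)

# Standard 8-hr TWA: sum of concentration x duration, divided by 8 hours.
# Unsampled time is treated as zero exposure here (a conservative reporting choice).
twa = sum(c * t for c, t in samples) / 8.0
exposure_ratio = twa / oel_8hr

print(f"8-hr TWA = {twa:.4f} ug/m3, {100 * exposure_ratio:.1f}% of the OEL")
```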

Keywords: occupational health and safety, risk assessment, chemical exposure, hierarchy of control, reproductive

Procedia PDF Downloads 173
145 Development of a Conceptual Framework for Supply Chain Management Strategies Maximizing Resilience in Volatile Business Environments: A Case of Ventilator Challenge UK

Authors: Elena Selezneva

Abstract:

Over the last two decades, unprecedented growth in uncertainty and volatility in all aspects of the business environment has caused major global supply chain disruptions and malfunctions. The effects of one failed company in a supply chain can ripple up and down the chain, causing a number of entities, or an entire supply chain, to collapse. The complicating factor is that an increasingly unstable and unpredictable business environment fuels the growing complexity of global supply chain networks. That makes supply chain operations extremely unpredictable and hard to manage with established methods and strategies. It has caused the premature demise of many companies around the globe, as they could not withstand or adapt to the storm of change. Solutions to this problem are not easy to come by. There is a lack of new empirically tested theories and practically viable supply chain resilience strategies. The mainstream organizational approach to managing supply chain resilience is rooted in well-established theories developed in the 1960s-1980s. However, their effectiveness is questionable in today's extremely volatile business environments. The systems thinking approach offers an alternative view of supply chain resilience, but it is still very much in the development stage. The aim of this explorative research is to investigate supply chain management strategies that are successful in taming complexity in volatile business environments and creating resilience in supply chains. The design of this research methodology was guided by an interpretivist paradigm. A literature review informed the selection of the systems thinking approach to supply chain resilience. Ventilator Challenge UK was selected as an explorative single case study because of the extremely resilient performance of its supply chain during a period of national crisis. Ventilator Challenge UK was an intensive care ventilator supply project for the NHS. It ran for 3.5 months and finished in 2020. The participants have since moved on with their lives, and most of them are no longer employed by the same organizations. Therefore, the study data include documents, historical interviews, live interviews with participants, and social media postings. The data analysis was accomplished in two stages. First, the data were thematically analyzed. In the second stage, pattern matching and pattern identification were used to identify themes that formed the findings of the research. The findings show that the Ventilator Challenge UK supply chain management practices demonstrated all the features of an adaptive dynamic system. They cover all the elements of the supply chain and employ an entire arsenal of adaptive dynamic system strategies enabling supply chain resilience. Moreover, the result is not a simple sum of parts and strategies: bonding elements and connections between the components of the supply chain and its environment enabled the amplification of resilience in the form of systemic emergence. Enablers are categorized into three subsystems: supply chain central strategy, supply chain operations, and supply chain communications. Together, these subsystems and their interconnections form the resilient supply chain system framework conceptualized by the author.

Keywords: enablers of supply chain resilience, supply chain resilience strategies, systemic approach in supply chain management, resilient supply chain system framework, ventilator challenge UK

Procedia PDF Downloads 81
144 Incorporating Spatial Transcriptome Data into Ligand-Receptor Analyses to Discover Regional Activation in Cells

Authors: Eric Bang

Abstract:

Interactions between receptors and ligands are crucial for many essential biological processes, including neurotransmission and metabolism. Ligand-receptor analyses that examine cell behavior and interactions often utilize cell type-specific RNA expression from single-cell RNA sequencing (scRNA-seq) data. Using CellPhoneDB, a public repository of ligands, receptors, and ligand-receptor interactions, cell-cell interactions were explored in a specific scRNA-seq dataset from kidney tissue, and the results were portrayed with dot plots and heat maps. Depending on the type of cell, each ligand-receptor pair was aligned with the interacting cell types, and the probabilities of these associations were calculated, with corresponding p-values reflecting the average expression values of the interacting pairs and their significance. Using single-cell data (sample kidney cell references), genes in the dataset were cross-referenced with those in the existing CellPhoneDB dataset. For example, a gene such as Pleiotrophin (PTN) present in the single-cell data also needed to be present in the CellPhoneDB dataset. Using the single-cell transcriptomics data obtained via slide-seq and the reference data, the CellPhoneDB program defines cell types and plots them in different formats, the two main ones being dot plots and heat map plots. The dot plot displays derived measures of the cell-to-cell interaction scores and p-values: each row shows a ligand-receptor pair, and each column shows the two interacting cell types. CellPhoneDB defines interactions and interaction levels from the gene expression level, and since the p-value is on a -log10 scale, larger dots represent more significant interactions. By performing an interaction analysis, a significant interaction was discovered for myeloid and T-cell ligand-receptor pairs, including those between Secreted Phosphoprotein 1 (SPP1) and Fibronectin 1 (FN1), which is consistent with previous findings. It was proposed that an effective protocol would involve a filtration step in which cell types are filtered out depending on which ligand-receptor pair is activated in that part of the tissue (sketched below), as well as the incorporation of the CellPhoneDB data into a streamlined workflow pipeline. The filtration step would take the form of a Python script that expedites the manual process necessary for dataset filtration. Being in Python allows it to be integrated with the CellPhoneDB dataset for future workflow analysis. The manual process involves filtering cell types based on which ligand/receptor pair is activated in kidney cells. One limitation is that some pairings are activated in multiple cell types at a time, so manual manipulation of the data is needed prior to analysis. Using the filtration script, accurate sorting is incorporated into the CellPhoneDB workflow rather than waiting until the output is produced and then subsequently applying the spatial data. It was envisioned that this would reveal where in the tissue various ligands and receptors are interacting with different cell types, allowing for easier identification of which cells are being impacted and why, for the purpose of disease treatment. The hope is that this new computational method utilizing spatially explicit ligand-receptor association data can be used to uncover previously unknown specific interactions within kidney tissue.
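The following Python fragment is a hypothetical version of the filtration step described above: it keeps only the cell-type pairs for which a chosen ligand-receptor pair is reported as significant by CellPhoneDB. The file name and column layout (an "interacting_pair" column plus one "cellA|cellB" column per cell-type pair) follow the usual CellPhoneDB significant-means output, but they should be treated as assumptions to verify against the actual run.

```python
import pandas as pd

# Hypothetical filtration step: keep only the cell-type pairs in which a chosen
# ligand-receptor pair is reported as significant by CellPhoneDB, so that
# spatial (slide-seq) regions can later be matched to those cell types.

SIG_MEANS = "out/significant_means.txt"   # assumed CellPhoneDB output path
TARGET_PAIR = "SPP1_FN1"                  # ligand-receptor pair of interest

df = pd.read_csv(SIG_MEANS, sep="\t")

row = df[df["interacting_pair"] == TARGET_PAIR]
if row.empty:
    raise SystemExit(f"{TARGET_PAIR} not present in {SIG_MEANS}")

# Cell-type pair columns hold the significant mean, or NaN when not significant.
pair_cols = [c for c in row.columns if "|" in c]
active = row[pair_cols].iloc[0].dropna()

print(f"Cell-type pairs with a significant {TARGET_PAIR} interaction:")
for cell_pair, mean_expr in active.sort_values(ascending=False).items():
    print(f"  {cell_pair}: mean expression {mean_expr:.3f}")
```

The resulting list of cell-type pairs could then be intersected with the spatial annotations, which is the streamlining step the abstract envisions.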

Keywords: bioinformatics, ligands, kidney tissue, receptors, spatial transcriptome

Procedia PDF Downloads 139
143 Improved Approach to the Treatment of Resistant Breast Cancer

Authors: Lola T. Alimkhodjaeva, Lola T. Zakirova, Soniya S. Ziyavidenova

Abstract:

Background: Breast cancer (BC) is still one of the urgent problems in oncology. A major obstacle to the full implementation of anti-tumor therapy is the development of drug resistance. Given that chemotherapy is the main antitumor treatment for BC patients, improving treatment results is an important task. Certain success in overcoming this situation has been associated with the use of methods of extracorporeal blood treatment (ECBT) such as plasmapheresis. Materials and Methods: We examined 129 women with resistant BC stages 3-4, aged 56 to 62 years, who had previously received 2 courses of CAF chemotherapy. All patients additionally underwent 2 courses of CAF chemotherapy, but against the background of ECBT with ultrasonic exposure. We studied the following parameters: 1. The main peripheral blood parameters before and after therapy. 2. The state of cellular immunity: identification of the activation markers CD23+, CD25+, CD38+, and CD95+ on lymphocytes was performed using monoclonal antibodies, and humoral immunity was evaluated by the levels of the main immunoglobulin classes IgG, IgA, and IgM in serum. 3. The degree of tumor regression, assessed by the 4 gradations recommended by the WHO (complete response - 100% regression; partial response - regression of more than 50% of the initial size; stabilization - regression of less than 50% of the initial size; and tumor progression). 4. Therapeutic pathomorphism in the tumor, determined according to Lavnikova. 5. Immediate and long-term results, up to 3 years and more. Results and Discussion: After extracorporeal blood treatment, anemia occurred in 38.9%, leukopenia in 36.8%, thrombocytopenia in 34.6%, and hypolymphemia in 26.8% of patients. Studies of immunoglobulin fractions in blood serum established a certain relationship between the immunoglobulin classes A, G, and M and their functions. The results showed that after treatment the values of the main immunoglobulins in the patients’ serum approached normal. Analysis of the expression of the activation markers CD25+ (cells bearing receptors for IL-2, the IL-2Rα chain) and CD95+ (lymphocytes that mediate physiological apoptosis) showed a tendency to increase, which apparently was due to activation of cellular immunity by cytokines released during ultrasonic treatment. Carrying out ECBT against the background of ultrasonic treatment improved the parameters of the immune system, expressed in stimulation of cellular immunity and correction of imbalances in humoral immunity. The key indicator of the efficiency of the conducted treatment is the immediate result, measured by the degree of tumor regression. After ECBT, complete regression was observed in 10.3% of patients, partial response in 55.5%, and stabilization in 34.5%; tumor progression was not observed. Morphological investigations of the tumors determined therapeutic pathomorphism grade 2 in 15%, grade 3 in 25%, and grade 4 in 60% of patients. One of the main criteria for the effect of the conducted treatment is the duration of remission in the postoperative period (up to 3 years or more). Remission of up to 3 years with ECBT was achieved in 34.5% of patients, and 5-year survival was 54%. The research carried out suggests that a comprehensive study of the immunological and clinical course of breast cancer allows a differentiated approach to the choice of methods for effective treatment.

Keywords: breast cancer, immunoglobulins, extracorporeal blood treatment, chemotherapy

Procedia PDF Downloads 275
142 Aerofloral Studies and Allergenicity Potentials of Dominant Atmospheric Pollen Types at Some Locations in Northwestern Nigeria

Authors: Olugbenga S. Alebiosu, Olusola H. Adekanmbi, Oluwatoyin T. Ogundipe

Abstract:

Pollen and spores have been identified as major airborne bio-particles inducing respiratory disorders such as asthma, allergic rhinitis, and atopic dermatitis among hypersensitive individuals. An aeropalynological study was conducted over a one-year sampling period with a view to investigating the monthly depositional rate of atmospheric pollen and spores, the influence of the immediate vegetation on airborne pollen distribution, and the allergenic potential of dominant atmospheric pollen types at selected study locations in Bauchi and Taraba states, Northwestern Nigeria. A Tauber-like pollen trap was employed in the aerosampling, with the sampler positioned at a height of 5 feet above the ground, followed by monthly collection of the recipient solution for the sampling period. The collected samples were subjected to acetolysis treatment and examined microscopically, with the identification of pollen grains and spores using reference materials and published photomicrographs. Plants within the surrounding vegetation were enumerated. Crude protein contents extracted from the pollen types found to be commonly dominant at both study locations (Senna siamea, Terminalia catappa, Panicum maximum, and Zea mays) were used to sensitize Mus musculus. Histopathological studies of bronchi and lung sections from certain dead M. musculus in the test groups were conducted. Blood samples were collected from the pre-orbital vein of M. musculus and processed for serological and haematological (differential and total white blood cell count) studies. ELISA was used to determine the levels of the serological parameters IgE and cytokines (TNF-α, IL-5, and IL-13). Statistical significance was observed in the correlations between the levels of serological and haematological parameters elicited by each test group, in the differences between the levels of serological and haematological parameters elicited by each test group and those of the control, as well as at varying sensitization periods. The results from this study revealed the dominant airborne pollen types across the study locations: Syzygium guineense, Tridax procumbens, Elaeis guineensis, Mimosa sp., Borreria sp., Terminalia sp., Senna sp., and Poaceae. Nephrolepis sp., Pteris sp., and a trilete fern also produced spores. This study also revealed that some of the airborne pollen types were produced by local plants at the study locations. Bronchi sections of M. musculus after the first and second sensitizations, as well as lung sections after the first sensitization with Senna siamea, showed areas of necrosis. Statistical significance was recorded in the correlations between the levels of some serological and haematological parameters produced by each test group and those of the control, as well as at certain sensitization periods. The study revealed some candidate pollen allergens for allergy sufferers at the study locations and also established a complexity of interaction between immune cells, IgE, and cytokines at varied periods of mice sensitization, forming a paradigm of the human immune response to different pollen allergens. However, it is expedient that further studies be conducted on these candidate pollen allergens to establish their allergenicity potential in humans within their immediate environment.

Keywords: airborne, hypersensitive, mus musculus, pollen allergens, respiratory, tauber-like

Procedia PDF Downloads 134
141 Rapid, Automated Characterization of Microplastics Using Laser Direct Infrared Imaging and Spectroscopy

Authors: Andreas Kerstan, Darren Robey, Wesam Alvan, David Troiani

Abstract:

Over the last 3.5 years, quantum cascade laser (QCL) technology has become increasingly important in infrared (IR) microscopy. The advantages over Fourier transform infrared (FTIR) spectroscopy are that large areas of a few square centimeters can be measured in minutes and that the light-intensive QCL makes it possible to obtain spectra with excellent S/N, even with just one scan. A firmly established application of the laser direct infrared imaging (LDIR) 8700 system is the analysis of microplastics. The presence of microplastics in the environment, drinking water, and food chains is gaining significant public interest. To study their presence, rapid and reliable characterization of microplastic particles is essential. Significant technical hurdles in microplastic analysis stem from the sheer number of particles to be analyzed in each sample. Total particle counts of several thousand are common in environmental samples, while well-treated bottled drinking water may contain relatively few. While visual microscopy has been used extensively, it is prone to operator error and bias and is limited to particles larger than 300 µm. As a result, vibrational spectroscopic techniques such as Raman and FTIR microscopy have become more popular; however, they are time-consuming. There is a demand for rapid and highly automated techniques to measure particle count and size and provide high-quality polymer identification. Analysis directly on the filter that often forms the last stage in sample preparation is highly desirable, as removing a sample preparation step can both improve laboratory efficiency and decrease opportunities for error. Recent advances in infrared micro-spectroscopy combining a QCL with scanning optics have created a new paradigm, LDIR. It offers improved speed of analysis as well as high levels of automation. Its mode of operation, however, requires an IR-reflective background, and this has, to date, limited the ability to perform direct “on-filter” analysis. This study explores the potential to combine the filter membrane with an infrared-reflective surface. By combining an IR-reflective material or coating on a filter membrane with advanced image analysis and detection algorithms, it is demonstrated that such filters can indeed be used in this way. Vibrational spectroscopic techniques play a vital role in the investigation and understanding of microplastics in the environment and the food chain. While vibrational spectroscopy is widely deployed, improvements and novel innovations in these techniques that increase the speed of analysis and ease of use can provide pathways to higher testing rates and, hence, improved understanding of the impacts of microplastics in the environment. Due to its capability to measure large areas in minutes, its speed, degree of automation, and excellent S/N, the LDIR could also be implemented for various other samples, such as food adulteration, coatings, laminates, fabrics, textiles, and tissues. This presentation will highlight a few of these and focus on the benefits of the LDIR versus classical techniques.

Keywords: QCL, automation, microplastics, tissues, infrared, speed

Procedia PDF Downloads 66
140 The Impact of a Simulated Teaching Intervention on Preservice Teachers’ Sense of Professional Identity

Authors: Jade V. Rushby, Tony Loughland, Tracy L. Durksen, Hoa Nguyen, Robert M. Klassen

Abstract:

This paper reports a study investigating the development and implementation of an online multi-session ‘scenario-based learning’ (SBL) program administered to preservice teachers in Australia. The transition from initial teacher education to the teaching profession can present numerous cognitive and psychological challenges for early career teachers. Therefore, the identification of additional supports, such as scenario-based learning, that can supplement existing teacher education programs may help preservice teachers to feel more confident and prepared for the realities and complexities of teaching. Scenario-based learning is grounded in situated learning theory which holds that learning is most powerful when it is embedded within its authentic context. SBL exposes participants to complex and realistic workplace situations in a supportive environment and has been used extensively to help prepare students in other professions, such as legal and medical education. However, comparatively limited attention has been paid to investigating the effects of SBL in teacher education. In the present study, the SBL intervention provided participants with the opportunity to virtually engage with school-based scenarios, reflect on how they might respond to a series of plausible response options, and receive real-time feedback from experienced educators. The development process involved several stages, including collaboration with experienced educators to determine the scenario content based on ‘critical incidents’ they had encountered during their teaching careers, the establishment of the scoring key, the development of the expert feedback, and an extensive review process to refine the program content. The 4-part SBL program focused on areas that can be challenging in the beginning stages of a teaching career, including managing student behaviour and workload, differentiating the curriculum, and building relationships with colleagues, parents, and the community. Results from prior studies implemented by the research group using a similar 4-part format have shown a statistically significant increase in preservice teachers’ self-efficacy and classroom readiness from the pre-test to the final post-test. In the current research, professional teaching identity - incorporating self-efficacy, motivation, self-image, satisfaction, and commitment to teaching - was measured over six weeks at multiple time points: before, during, and after the 4-part scenario-based learning program. Analyses included latent growth curve modelling to assess the trajectory of change in the outcome variables throughout the intervention. The paper outlines (1) the theoretical underpinnings of SBL, (2) the development of the SBL program and methodology, and (3) the results from the study, including the impact of the SBL program on aspects of participating preservice teachers’ professional identity. The study shows how SBL interventions can be implemented alongside the initial teacher education curriculum to help prepare preservice teachers for the transition from student to teacher.

Keywords: classroom simulations, e-learning, initial teacher education, preservice teachers, professional learning, professional teaching identity, scenario-based learning, teacher development

Procedia PDF Downloads 71
139 Screening for Women with Chorioamnionitis: An Integrative Literature Review

Authors: Allison Herlene Du Plessis, Dalena (R.M.) Van Rooyen, Wilma Ten Ham-Baloyi, Sihaam Jardien-Baboo

Abstract:

Introduction: Women die in pregnancy and childbirth for five main reasons: severe bleeding, infections, unsafe abortions, hypertensive disorders (pre-eclampsia and eclampsia), and medical complications including cardiac disease, diabetes, or HIV/AIDS complicated by pregnancy. In 2015, the WHO classified sepsis as the third highest cause of maternal mortality in the world. Chorioamnionitis is a clinical syndrome of intrauterine infection during any stage of pregnancy; it refers to bacteria ascending from the vaginal canal into the uterus, causing infection. While the incidence rates of chorioamnionitis are not well documented, complications related to chorioamnionitis are well documented, and midwives still struggle to identify this condition in time due to its complex nature. Few diagnostic methods are available in public health services, due to escalated laboratory costs. Often the affordable biomarkers, such as C-reactive protein (CRP), full blood count (FBC), and white blood cell count (WBC), have low significance in diagnosing chorioamnionitis. A lack of screening impacts effective and timeous management of chorioamnionitis, and early identification and management of risks could help to prevent neonatal complications and reduce the subsequent series of morbidities and healthcare costs of infants affected by perinatal infections. Objective: This integrative literature review provides an overview of the current best research evidence on the screening of women at risk for chorioamnionitis. Design: An integrative literature review was conducted using a systematic electronic literature search through EBSCOhost, Cochrane Online, Wiley Online, PubMed, Scopus, and Google. Guidelines, research studies, and reports in English related to chorioamnionitis from 2008 up until 2020 were included in the study. Findings: After critical appraisal, 31 articles were included. More than two thirds (67%) of the literature included ranked on the three highest levels of evidence (Levels I, II, and III). Data extracted regarding screening for chorioamnionitis were synthesized into four themes, namely: screening by clinical signs and symptoms, screening by causative factors of chorioamnionitis, screening of obstetric history, and essential biomarkers to diagnose chorioamnionitis. Key conclusions: There are factors that can be used by midwives to identify women at risk for chorioamnionitis. However, there is a paucity of established sociological, epidemiological, and behavioral factors to screen this population. Several biomarkers are available to diagnose chorioamnionitis. Increased interleukin-6 in amniotic fluid is the better indicator and strongest predictor of histological chorioamnionitis, whereas the available rapid matrix metalloproteinase-8 test requires further testing. Maternal white blood cell count (WBC) has shown poor selectivity and sensitivity, and C-reactive protein (CRP) thresholds varied among studies and are not ideal for a conclusive diagnosis of subclinical chorioamnionitis. Implications for practice: Screening of women at risk for chorioamnionitis by healthcare providers caring for pregnant women, including midwives, is important for diagnosis and management before complications arise, particularly in resource-constrained settings.

Keywords: chorioamnionitis, guidelines, best evidence, screening, diagnosis, pregnant women

Procedia PDF Downloads 123
138 Targeting Peptide Based Therapeutics: Integrated Computational and Experimental Studies of Autophagic Regulation in Host-Parasite Interaction

Authors: Vrushali Guhe, Shailza Singh

Abstract:

Cutaneous leishmaniasis is a neglected tropical disease, present worldwide, caused by the protozoan parasite Leishmania major. The therapeutic armamentarium for leishmaniasis shows several limitations, as the drugs have toxic effects and the parasite is developing increasing resistance. Thus, the identification of novel therapeutic targets is of paramount importance. Previous studies have shown that autophagy, a cellular process, can either facilitate infection or aid in the elimination of the parasite, depending on the specific parasite species and host background in leishmaniasis. In the present study, our objective was to target the essential autophagy protein ATG8, which plays a crucial role in the survival, infection dynamics, and differentiation of the Leishmania parasite. ATG8 in Leishmania major and its homologue LC3 in Homo sapiens act as autophagic markers. The present study demonstrated the crucial role of the ATG8 protein as a potential target for combating Leishmania major infection. Through bioinformatics analysis, we identified non-conserved motifs within the ATG8 protein of Leishmania major that are not present in LC3 of Homo sapiens. Against these two non-conserved motifs, we generated a library of 60 peptides on the basis of physicochemical properties. These peptides underwent a filtering process based on various parameters, including feasibility of synthesis and purification, compatibility with Selected Reaction Monitoring (SRM)/Multiple Reaction Monitoring (MRM), hydrophobicity, hydropathy index, average molecular weight (Mw average), monoisotopic molecular weight (Mw monoisotopic), theoretical isoelectric point (pI), and half-life. Further filtering criteria shortlisted three peptides using molecular docking and molecular dynamics simulations. The direct interaction between ATG8 and the shortlisted peptides was confirmed through Surface Plasmon Resonance (SPR) experiments. Notably, these peptides exhibited the remarkable ability to penetrate the parasite membrane and exert profound effects on Leishmania major. Treatment with these peptides significantly impacted parasite survival, leading to alterations in the cell cycle and morphology. Furthermore, the peptides were found to modulate autophagosome formation, particularly under starved conditions, suggesting their involvement in disrupting the regulation of autophagy within Leishmania major. In vitro studies demonstrated that the selected peptides effectively reduced the parasite load within infected host cells. Encouragingly, these findings were corroborated by in vivo experiments, which showed a reduction in parasite burden upon peptide administration. Additionally, the peptides were observed to affect the levels of LC3-II within host cells. In conclusion, our findings highlight the efficacy of these novel peptides in targeting Leishmania major ATG8 and disrupting parasite survival. These results provide valuable insights into the development of innovative therapeutic strategies against leishmaniasis via targeting of the autophagy protein ATG8 of Leishmania major.
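As an illustration of the kind of physicochemical pre-filtering described for the 60-peptide library, the sketch below uses Biopython's ProtParam module to compute molecular weight, theoretical pI, and the GRAVY hydropathy index for candidate sequences. The sequences and cut-off values are invented placeholders, not the peptides actually designed against the ATG8 motifs.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Illustrative physicochemical screen; the sequences and thresholds are
# placeholders, not the peptides designed against the ATG8 motifs.
candidate_peptides = [
    "ACDKLLGWYERT",
    "MKWVTFISLLFLFSSAYS",
    "GHRRWQPLDNK",
]

for seq in candidate_peptides:
    pa = ProteinAnalysis(seq)
    mw = pa.molecular_weight()     # average molecular weight (Da)
    pi = pa.isoelectric_point()    # theoretical pI
    gravy = pa.gravy()             # grand average of hydropathy (GRAVY)

    # Example filter: modestly sized, not strongly hydrophobic, near-neutral pI.
    keep = mw < 2500 and gravy < 0.5 and 4.0 <= pi <= 10.0
    print(f"{seq}: MW={mw:.1f} Da, pI={pi:.2f}, GRAVY={gravy:.2f} -> "
          f"{'keep' if keep else 'drop'}")
```

In a real workflow, additional criteria from the abstract (synthesis feasibility, SRM/MRM compatibility, half-life) would be layered on top of this simple pass/fail screen.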

Keywords: ATG8, leishmaniasis, surface plasmon resonance, MD simulation, molecular docking, peptide designing, therapeutics

Procedia PDF Downloads 83
137 Blood Chemo-Profiling in Workers Exposed to Occupational Pyrethroid Pesticides to Identify Associated Diseases

Authors: O. O. Sufyani, M. E. Oraiby, S. A. Qumaiy, A. I. Alaamri, Z. M. Eisa, A. M. Hakami, M. A. Attafi, O. M. Alhassan, W. M. Elsideeg, E. M. Noureldin, Y. A. Hobani, Y. Q. Majrabi, I. A. Khardali, A. B. Maashi, A. A. Al Mane, A. H. Hakami, I. M. Alkhyat, A. A. Sahly, I. M. Attafi

Abstract:

According to the Food and Agriculture Organization (FAO) Pesticides Use Database, pesticide use in agriculture in Saudi Arabia has more than doubled, from 4539 tons in 2009 to 10496 tons in 2019. Among pesticides, pyrethroids are commonly used in Saudi Arabia. Pesticides may increase susceptibility to a variety of diseases, particularly among pesticide workers, due to their extensive use, indiscriminate use, and long-term exposure. Therefore, analyzing blood chemo-profiles and evaluating the detected substances as biomarkers of pyrethroid pesticide exposure may assist in identifying and predicting the adverse effects of exposure, which may be used for both preventative and risk assessment purposes. The purpose of this study was to (a) analyze the blood chemo-profiles by Gas Chromatography-Mass Spectrometry (GC-MS), (b) identify the most commonly detected chemicals in an exposure-time-dependent manner using a Venn diagram, and (c) identify their associated diseases among pesticide workers using the analyzer tools on the Comparative Toxicogenomics Database (CTD) website. 250 healthy male volunteers (20-60 years old) who deal with pesticides in the Jazan region of Saudi Arabia (exposure intervals: 1-2, 4-6, 6-8, and more than 8 years) were included in the study. A questionnaire was used to collect demographic information, the duration of pesticide exposure, and the existence of chronic conditions. Blood samples were collected for biochemistry analysis and extracted by solid-phase extraction for gas chromatography-mass spectrometry (GC-MS) analysis. The biochemistry analysis revealed no significant changes in response to the exposure period; however, an inverse association between the albumin level and the exposure interval was observed. The blood chemo-profiles differed in an exposure-time-dependent manner. This analysis identified the common chemical set associated with each group and the associated significant occupational diseases. While some of these chemicals are associated with a variety of diseases, the distinguishing feature of these chemically associated disorders is their applicability for prevention measures. The most interesting finding was the identification of several chemicals (erucic acid, pelargonic acid, alpha-linolenic acid, dibutyl phthalate, diisobutyl phthalate, dodecanol, myristic acid, pyrene, and 8,11,14-eicosatrienoic acid) associated with pneumoconiosis, asbestosis, asthma, silicosis, and berylliosis. The chemical-disease association study also found that cancer, digestive system disease, nervous system disease, and metabolic disease were the most often recognized disease categories for the common chemical set. A hierarchical clustering approach was used to compare the expression patterns and exposure intervals of the commonly detected chemicals. More study is needed to validate these chemicals as early markers of pyrethroid-insecticide-related occupational disease, which might assist in evaluating and reducing risk. The current study contributes valuable data and recommendations to public health.
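To make the Venn-diagram step concrete, the short Python sketch below shows how a common chemical set across exposure-interval groups can be extracted by set intersection before any chemical-disease lookup. The per-group chemical lists are invented placeholders standing in for the actual GC-MS hits.

```python
# Minimal sketch of deriving the "common chemical set" across exposure-interval
# groups before the CTD chemical-disease lookup. Chemical lists are placeholders.

detected = {
    "1-2 yr": {"pelargonic acid", "dibutyl phthalate", "myristic acid", "pyrene"},
    "4-6 yr": {"pelargonic acid", "erucic acid", "myristic acid", "dodecanol"},
    "6-8 yr": {"pelargonic acid", "myristic acid", "diisobutyl phthalate"},
    ">8 yr":  {"pelargonic acid", "myristic acid", "alpha-linolenic acid"},
}

# Chemicals detected in every exposure group (the Venn-diagram core).
common = set.intersection(*detected.values())
print("Common to all groups:", sorted(common))

# Chemicals unique to a single group (the non-overlapping Venn regions).
for group, chems in detected.items():
    others = set.union(*(c for g, c in detected.items() if g != group))
    print(f"Unique to {group}:", sorted(chems - others))
```

The resulting common set is what would then be submitted to the CTD analyzer tools for the chemical-disease association step described above.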

Keywords: occupational, toxicology, chemo-profiling, pesticide, pyrethroid, GC-MS

Procedia PDF Downloads 102
136 Novel Aspects of Merger Control Pertaining to Nascent Acquisition: An Analytical Legal Research

Authors: Bhargavi G. Iyer, Ojaswi Bhagat

Abstract:

It is often noted that the value of a novel idea lies in its successful implementation. However, successful implementation requires the nurturing and encouragement of innovation. Nascent competitors are a true representation of innovation in any given industry. A nascent competitor is an entity whose prospective innovation poses a future threat to an incumbent dominant competitor. While a nascent competitor benefits in several ways, it is also significantly exposed and is at greater risk of facing the brunt of exclusionary practices and abusive conduct by dominant incumbent competitors in the industry. This research paper aims to explore the risks and threats faced by nascent competitors and to analyse the benefits they accrue as well as the advantages they proffer to the economy, through an analytical, critical study. In such competitive market environments, a rise in acquisitions of nascent competitors by incumbent dominant firms is observed. Therefore, this paper examines the dynamics of nascent acquisition. Further, this paper delves into the role of antitrust bodies in regulating nascent acquisition. This paper also deals with the question of how to distinguish harmful from harmless acquisitions in order to facilitate ideal enforcement practice, and it proposes mechanisms of scrutiny in order to ensure healthy market practices and efficient merger control in the context of nascent acquisitions. Taking into account the scope and nature of the topic, as well as the resources available and accessible, a combination of doctrinal and analytical research methods was employed, utilising secondary sources to assess and analyse the subject of research. While legally evaluating the Killer Acquisition theory and the Nascent Potential Acquisition theory, this paper critically surveys the precedents and instances of nascent acquisitions. In addition to affording a compendious account of the legislative frameworks and regulatory mechanisms in the United States, the United Kingdom, and the European Union, it hopes to suggest an internationally practicable legal foundation for domestic legislation and enforcement to adopt. This paper engages with the complexities and uncertainties surrounding nascent acquisitions and attempts to suggest viable and plausible policy measures in antitrust law. It additionally examines the effects of such nascent acquisitions upon the consumer and the market economy. This paper weighs the argument for shifting the evidentiary burden onto the merging parties in order to improve merger control and regulation, and expounds on the strengths and weaknesses of this approach. It is posited that an effective combination of factual, legal, and economic analysis of both the acquired and acquiring companies has the potential to improve ex post and ex ante merger review outcomes involving nascent companies, thus preventing anti-competitive practices. This paper concludes with an analysis of the possibility and feasibility of industry-specific identification of anti-competitive nascent acquisitions and the implementation of measures accordingly.

Keywords: acquisition, antitrust law, exclusionary practices, merger control, nascent competitor

Procedia PDF Downloads 161
135 Application and Aspects of Biometeorology in Inland Open Water Fisheries Management in the Context of Changing Climate: Status and Research Needs

Authors: U.K. Sarkar, G. Karnatak, P. Mishal, Lianthuamluaia, S. Kumari, S.K. Das, B.K. Das

Abstract:

Inland open water fisheries provide food, income, livelihoods, and nutritional security to millions of fishers across the globe. However, open water ecosystems and fisheries are threatened by climate change and anthropogenic pressures, which have become more visible in the last six decades, making these resources vulnerable. Understanding the interaction between meteorological parameters and inland fisheries is imperative to develop mitigation and adaptation strategies. As per the IPCC Fifth Assessment Report, the earth has been warming at a faster rate in recent decades. Global mean surface temperature (GMST) for the decade 2006–2015 was 0.87°C higher than the average over the 1850–1900 period. The direct and indirect impacts of climatic parameters on the ecology of fisheries ecosystems have a great bearing on fisheries due to alterations in fish physiology. The impact of meteorological factors on ecosystem health and fish food organisms brings about changes in fish diversity, assemblage, reproduction, and natural recruitment. India’s average temperature has risen by around 0.7°C during 1901–2018. Studies show that the mean air temperature in the Ganga basin has increased by 0.20–0.47°C and annual rainfall has decreased by 257–580 mm during the last three decades. These studies clearly indicate visible impacts of climatic and environmental factors on inland open water fisheries. Besides, a significant reduction in wetland depth and area (37.20–57.68%), a reduction in the diversity of natural indigenous fish fauna (ranging from 22.85 to 54%), and a progression of trophic state from mesotrophic to eutrophic were recorded. In this communication, different applications of biometeorology in inland fisheries management are discussed, with special reference to the assessment of ecosystem and species vulnerability to climatic variability and change. Further, the paper discusses the impact of climate anomalies and extreme climatic events on inland fisheries and emphasizes novel modeling approaches for understanding the impact of climatic and environmental factors on reproductive phenology, for the identification of climate-sensitive/resilient fish species and the adoption of climate-smart fisheries in the future. Adaptation and mitigation strategies to enhance fish production and the role of culture-based fisheries and enclosure culture in converting sequestered carbon into blue carbon are also discussed. In general, the type and direction of the influence of meteorological parameters on fish biology in open water fisheries ecosystems are not adequately understood, and the optimum range of meteorological parameters for sustaining inland open water fisheries is yet to be established. Therefore, the application of biometeorology in inland fisheries offers ample scope for understanding these dynamics in a changing climate, which would help to develop a database on this least-addressed frontier research area. This would further help to project fisheries scenarios under changing climate regimes and to develop adaptation and mitigation strategies to cope with adverse meteorological factors, in order to sustain fisheries and conserve aquatic ecosystems and biodiversity.

Keywords: biometeorology, inland fisheries, aquatic ecosystem, modeling, India

Procedia PDF Downloads 195