115 Understanding the Cause(s) of Social, Emotional and Behavioural Difficulties of Adolescents with ADHD and Its Implications for the Successful Implementation of Intervention(s)
Authors: Elisavet Kechagia
Abstract:
Due to the interplay of different genetic and environmental risk factors and its heterogeneous nature, the concept of attention deficit hyperactivity disorder (ADHD) has generated controversy and conflict, which are, in turn, reflected in the controversial arguments about its treatment. Taking into account recent well-evidenced research suggesting that ADHD is a condition in which biopsychosocial factors are all woven together, the current paper explores the multiple risk factors that are likely to influence ADHD, with a particular focus on adolescents with ADHD who might experience comorbid social, emotional and behavioural disorders (SEBD). In the first section of this paper, the primary objective was to investigate the conflicting ideas regarding the definition, diagnosis and treatment of ADHD at an international level, as well as to critically examine and identify the limitations of the two most prevailing sets of diagnostic criteria that inform current diagnosis: the American Psychiatric Association’s (APA) diagnostic scheme, DSM-5, and the World Health Organisation’s (WHO) classification of diseases, ICD-10. Taking into consideration the findings of current longitudinal studies on the association of ADHD with high rates of comorbid conditions and social dysfunction, in the second section the author moves towards an investigation of the transitional points (physical, psychological and social) that students with ADHD might experience during early adolescence, as informed by neuroscience and developmental contextualism theory. The third section is an exploration of the different perspectives on ADHD as reflected in the self-reports of individuals with ADHD and the KENT project’s findings on school staff’s attitudes and practices. 
In the last section, given the high rates of SEBDs in adolescents with ADHD, the paper examines how cognitive behavioural therapy (CBT), coupled with other interventions, could be effective in ameliorating anti-social behaviours and/or other emotional and behavioural difficulties of students with ADHD. The findings of a range of randomised controlled studies indicate that CBT might have positive outcomes in adolescents with multiple behavioural problems; hence, it is suggested that it be considered both in schools and in other community settings. Finally, taking into account the heterogeneous nature of ADHD, the different biopsychosocial and environmental risk factors that take place during adolescence, and the discourse and practices concerning ADHD and SEBD, it is suggested how it might be possible to make sense of, and meaningful improvements to, the education of adolescents with ADHD within a multi-modal and multi-disciplinary whole-school approach that addresses the multiple problems that not only students with ADHD but also their peers might experience. Further research based on larger-scale controlled studies, investigating the effectiveness of various interventions as well as the profiles of those students who have benefited from particular approaches and those who have not, will generate further evidence concerning the psychoeducation of adolescents with ADHD, allowing for generalised conclusions to be drawn.
Keywords: adolescence, attention deficit hyperactivity disorder, cognitive behavioural therapy, comorbid social emotional behavioural disorders, treatment
Procedia PDF Downloads 319
114 Factors Associated with Hand Functional Disability in People with Rheumatoid Arthritis: A Systematic Review and Best-Evidence Synthesis
Authors: Hisham Arab Alkabeya, A. M. Hughes, J. Adams
Abstract:
Background: People with Rheumatoid Arthritis (RA) continue to experience problems with hand function despite new drug advances and targeted medical treatment. Consequently, it is important to identify the factors that influence the impact of RA disease on hand function. This systematic review identified observational studies that reported factors that influenced the impact of RA on hand function. Methods: The MEDLINE, EMBASE, CINAHL, AMED, PsycINFO, and Web of Science databases were searched from January 1990 up to March 2017. Full-text articles published in English that described factors related to hand functional disability in people with RA were selected following predetermined inclusion and exclusion criteria. Pertinent data were thoroughly extracted and documented using a pre-designed data extraction form by the lead author, and cross-checked by the review team for completion and accuracy. Factors related to hand function were classified under the domains of the International Classification of Functioning, Disability, and Health (ICF) framework and health-related factors. Three reviewers independently assessed the methodological quality of the included articles using the quality of cross-sectional studies (AXIS) tool. Factors related to hand function that were investigated in two or more studies were explored using a best-evidence synthesis. Results: Twenty articles from 19 studies met the inclusion criteria from 1,271 citations; all presented cross-sectional data (five high-quality and 15 low-quality studies), resulting in at best limited evidence in the best-evidence synthesis. For the factors classified under the ICF domains, the best-evidence synthesis indicates that a range of body structure and function factors were related to hand functional disability. However, the key factors were hand strength, disease activity, and pain intensity. Low functional status (physical, emotional and social) was also found to be related to limited hand function. 
For personal factors, there is limited evidence that gender is not related to hand function, whereas conflicting evidence was found regarding the relationship between age and hand function. In the domain of environmental factors, there was limited evidence that work activity was not related to hand function. Regarding health-related factors, there was limited evidence that the level of rheumatoid factor (RF) was not related to hand function. Finally, conflicting evidence was found regarding the relationship between hand function and disease duration and general health status. Conclusion: Studies focused on body structure and function factors, highlighting a lack of investigation into personal and environmental factors when considering the impact of RA on hand function. The level of evidence that exists was limited, but it identified that modifiable factors such as grip or pinch strength, disease activity and pain are the most influential factors on hand function in people with RA. The review findings suggest that important personal and environmental factors that impact on hand function in people with RA are not yet considered or reported in clinical research. Well-designed longitudinal, preferably cohort, studies are now needed to better understand the causality between personal and environmental factors and hand functional disability in people with RA.
Keywords: factors, hand function, rheumatoid arthritis, systematic review
Procedia PDF Downloads 148
113 Impact Analysis of a School-Based Oral Health Program in Brazil
Authors: Fabio L. Vieira, Micaelle F. C. Lemos, Luciano C. Lemos, Rafaela S. Oliveira, Ian A. Cunha
Abstract:
Brazil has some challenges ahead related to population oral health, most of them associated with the need to expand promotion and prevention activities to the local level, offer equal access to services, and promote changes in the lifestyle of the population. The program implemented an oral health initiative in public schools in the city of Salvador, Bahia. The mission was to improve oral health among students in primary and secondary education, from 2 to 15 years old, using the school as a pathway to increase access to healthcare. The main actions consisted of a team's visit to the schools with educational sessions for dental cavity prevention and individual assessment. The program incorporated a clinical surveillance component through a dental evaluation of every student searching for dental disease and caries, standardization of the dentists’ team to reach uniform classification on the assessments, and the use of an online platform to register data directly from the schools. Subsequently, the students with caries were referred for free clinical treatment at the program’s Health Centre. The primary purpose of this study was to analyze the effects and outcomes of this school-based oral health program. The study sample was composed of data from a 3-year period - 2015 to 2017 - from 13 public schools in the suburbs of the city of Salvador, with a total of 9,278 assessments over this period. From the data collected, the prevalence of children with decay in permanent teeth was chosen as the most reliable indicator. The prevalence was calculated for each one of the 13 schools as the number of children with 1 or more dental caries in permanent teeth divided by the total number of students assessed at each school each year. Then the percentage change per year was calculated for each school. 
Some schools presented a higher variation in the total number of assessments in one of the three years, so for these, the percentage change calculation was done using the two years with less variation. The results show that 10 of the 13 schools presented significant improvements in the indicator of caries in permanent teeth. Across the 13 schools, the mean percentage reduction in the number of students with caries was 26.8%, and the median was 32.2%. The highest improvement reached a decrease of 65.6% in the indicator. Three schools presented a rise in caries prevalence (8.9%, 18.9% and 37.2% increases) that, on an initial analysis, seems to be explained by the students’ cohort rotation among other schools, as well as absenteeism from treatment. In conclusion, the program shows a relevant impact on the reduction of caries in permanent teeth among students and demonstrates the need for the continuity and expansion of this integrated healthcare approach. The significance of the articulation between the health and educational systems has also been evident, representing a fundamental approach to improving healthcare access for children, especially in scenarios such as that presented in Brazil.
Keywords: primary care, public health, oral health, school-based oral health, data management
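The indicator arithmetic described in this abstract is simple enough to sketch in code. The following is an illustrative computation of the prevalence indicator and its percentage change; the figures and the school are invented, not the program's data:

```python
# Illustrative sketch (not the program's actual code) of the caries prevalence
# indicator and its change between two assessment years, as described above.

def prevalence(children_with_caries, total_assessed):
    """Share of assessed students with >= 1 carious permanent tooth."""
    return children_with_caries / total_assessed

def percent_change(prev_old, prev_new):
    """Relative change in prevalence between two assessment years (in %)."""
    return (prev_new - prev_old) / prev_old * 100.0

# Hypothetical school: 120 of 400 students with caries in 2015, 81 of 390 in 2017
p2015 = prevalence(120, 400)            # 0.30
p2017 = prevalence(81, 390)             # ~0.208
change = percent_change(p2015, p2017)   # ~ -30.8%, i.e. a reduction
```

A negative result corresponds to the reductions reported for 10 of the 13 schools; a positive one to the three schools whose prevalence rose.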
Procedia PDF Downloads 134
112 A Supply Chain Risk Management Model Based on Both Qualitative and Quantitative Approaches
Authors: Henry Lau, Dilupa Nakandala, Li Zhao
Abstract:
In today’s business, it is well-recognized that risk is an important factor that needs to be taken into consideration before a decision is made. Studies indicate that both the number of risks faced by organizations and their potential consequences are growing. Supply chain risk management has become one of the major concerns for practitioners and researchers. Supply chain leaders and scholars are now focusing on the importance of managing supply chain risk. In order to meet the challenge of managing and mitigating supply chain risk (SCR), we must first identify the different dimensions of SCR and assess its relevant probability and severity. SCR has been classified in many different ways; there are no consistently accepted dimensions of SCRs, and several different classifications are reported in the literature. Basically, supply chain risks can be classified into two dimensions, namely disruption risk and operational risk. Disruption risks are those caused by events such as bankruptcy, natural disasters and terrorist attacks. Operational risks are related to supply and demand coordination and uncertainty, such as uncertain demand and uncertain supply. Disruption risks are rare but severe and hard to manage, while operational risks can be reduced through effective SCM activities. Other SCRs include supply risk, process risk, demand risk and technology risk. In fact, the disorganized classification of SCR has created confusion for SCR scholars. Moreover, practitioners need to identify and assess SCR. As such, it is important to have an overarching framework tying all these SCR dimensions together, for two reasons. First, it helps researchers use these terms for communication of ideas based on the same concept. Second, a shared understanding of the SCR dimensions will help researchers focus on the more important research objective: operationalization of SCR, which is very important for assessing SCR. 
In general, the fresh food supply chain is subject to a certain level of risk, such as supply risk (low quality, delivery failure, hot weather, etc.) and demand risk (seasonal food imbalance, new competitors). Effective strategies to mitigate fresh food supply chain risk are required to enhance operations. Before implementing effective mitigation strategies, we need to identify the risk sources and evaluate the risk level. However, assessing supply chain risk is not an easy matter, and existing research mainly uses qualitative methods, such as the risk assessment matrix. To address the relevant issues, this paper aims to analyze the risk factors of the fresh food supply chain using an approach comprising both fuzzy logic and hierarchical holographic modeling techniques. This novel approach is able to take advantage of the benefits of both of these well-known techniques and at the same time offset their drawbacks in certain aspects. In order to develop this integrated approach, substantial research work is needed to effectively combine these two techniques in a seamless way. To validate the proposed integrated approach, a case study in a fresh food supply chain company was conducted to verify the feasibility of its functionality in a real environment.
Keywords: fresh food supply chain, fuzzy logic, hierarchical holographic modelling, operationalization, supply chain risk
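As a rough illustration of how a fuzzy-logic risk score of the probability-severity kind discussed above can be computed, the sketch below fuzzifies both inputs over three linguistic levels and defuzzifies with a weighted average. The membership functions, level definitions, and scores are assumptions made for the example, not the authors' model:

```python
# A minimal fuzzy risk-scoring sketch in the spirit of the paper's approach.
# All membership functions and weights here are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_risk(probability, severity):
    """Fuzzify probability and severity (0-1 scales), combine each level by
    min (fuzzy AND), and defuzzify with a weighted average of level scores."""
    levels = {"low": (0.0, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.0)}
    scores = {"low": 0.2, "medium": 0.5, "high": 0.9}
    num = den = 0.0
    for name, params in levels.items():
        mu = min(tri(probability, *params), tri(severity, *params))
        num += mu * scores[name]
        den += mu
    return num / den if den else 0.0

# A likely and fairly severe risk (e.g. hot weather spoiling produce)
print(round(fuzzy_risk(0.8, 0.7), 2))  # 0.7
```

In the paper's integrated approach, scores of this kind would be attached to the risk sources identified by hierarchical holographic modeling rather than assessed in isolation.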
Procedia PDF Downloads 243
111 Resolving Urban Mobility Issues through Network Restructuring of Urban Mass Transport
Authors: Aditya Purohit, Neha Bansal
Abstract:
Unplanned urbanization and the multidirectional sprawl of cities have resulted in increased motorization and deteriorating transport conditions: traffic congestion, longer commutes, pollution, an increased carbon footprint, and, above all, increased fatalities. In order to overcome these problems, various practices have been adopted, including promoting and implementing mass transport, traffic junction channelization, and smart transport. However, these methods are found to focus primarily on vehicular mobility rather than people's accessibility. With this research gap, this paper tries to resolve the mobility issues of Ahmedabad city in India, which, being the economic capital of Gujarat state, has a huge commuter and visitor inflow. This research aims to resolve the traffic congestion and urban mobility issues focusing on the Gujarat State Road Transport Corporation (GSRTC) for the city of Ahmedabad by analyzing the existing operations and network structure of GSRTC, followed by finding possibilities of integrating it with other modes of urban transport. The network restructuring (NR) methodology is used with appropriate variations, based on commuter demand and the growth pattern of the city. To do this, 'scenarios' based on priority issues (using 12 parameters), together with their best possible solutions, are established after a route network analysis of a 2,700-person sample drawn from 20 traffic junctions/nodes across the city. 
Approximately a 5% sample (of passenger inflow) at each node is considered using a stratified random sampling technique. The two scenarios are - Scenario 1: resolving mobility issues through a Special Purpose Vehicle (SPV), a joint venture between GSRTC and private operators, to establish a feeder service that provides a transfer service for passengers moving from the inner city area to identified peripheral terminals; and Scenario 2: augmenting existing mass transport services such as BRTS and AMTS to use them as feeder services to the identified peripheral terminals. Each of these has been analyzed for its suitability/feasibility in network restructuring. A desire-line diagram constructed from this analysis indicated that, on average, 62% of designated GSRTC routes overlap with the mass transportation service routes of BRTS and AMTS in the city. This has resulted in duplication of bus services, causing traffic congestion, especially at the Central Bus Station (CBS). Terminating GSRTC services on the periphery of the city is found to be the best network restructuring proposal. This limits GSRTC buses to the city fringe area and prevents them from entering the city core areas. These end-terminals of GSRTC are integrated with BRTS and AMTS services, which helps in segregating intra-state and inter-state bus services. The research concludes that the absence of an integrated multimodal transport network has resulted in complex transport access for commuters. As a further scope of research, understanding the value of access time in total travel time and its implication for the generalized cost of a trip, and how it varies from city to city, may be taken up.
Keywords: mass transportation, multi-modal integration, network restructuring, travel behavior, urban transport
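The overlap figure behind the desire-line analysis rests on a simple route-by-route comparison, which can be sketched as follows. The stop names, the route, and the resulting share are hypothetical, invented only to illustrate the measure:

```python
# Hypothetical sketch of a route-overlap measure: the share of a GSRTC
# route's stops already served by BRTS/AMTS corridors. Names are invented.

def overlap_share(route_stops, mass_transit_stops):
    """Fraction of a route's stops that fall on existing BRTS/AMTS corridors."""
    served = sum(1 for stop in route_stops if stop in mass_transit_stops)
    return served / len(route_stops)

brts_amts = {"CBS", "Paldi", "Lal Darwaja", "Naroda", "Vastral"}
gsrtc_route = ["CBS", "Paldi", "Lal Darwaja", "Sarkhej", "Bavla"]
print(round(overlap_share(gsrtc_route, brts_amts) * 100))  # 3 of 5 stops -> 60
```

Averaging such shares over all designated GSRTC routes yields an aggregate overlap of the kind the study reports (62%).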
Procedia PDF Downloads 198
110 Toward the Decarbonisation of EU Transport Sector: Impacts and Challenges of the Diffusion of Electric Vehicles
Authors: Francesca Fermi, Paola Astegiano, Angelo Martino, Stephanie Heitel, Michael Krail
Abstract:
In order to achieve the targeted emission reductions for the decarbonisation of the European economy by 2050, fundamental contributions are required from both the energy and transport sectors. The objective of this paper is to analyse the impacts of a large-scale diffusion of e-vehicles, either battery-based or fuel cell, together with the implementation of transport policies aiming at decreasing the use of motorised private modes, in order to achieve greenhouse gas emission reduction goals in the context of a future high share of renewable energy. The analysis of the impacts and challenges of future scenarios on the transport sector is performed with the ASTRA (ASsessment of TRAnsport Strategies) model. ASTRA is a strategic system-dynamics model at the European scale (the EU28 countries, Switzerland and Norway), consisting of different sub-modules related to specific aspects: the transport system (e.g. passenger trips, tonnes moved), the vehicle fleet (composition and evolution of technologies), the demographic system, the economic system, and the environmental system (energy consumption, emissions). A key feature of ASTRA is that the modules are linked together: changes in one system are transmitted to other systems and can feed back to the original source of variation. Thanks to its multidimensional structure, ASTRA is capable of simulating a wide range of impacts stemming from the application of transport policy measures: the model addresses direct impacts as well as second-level and third-level impacts. The simulation of the different scenarios is performed within the REFLEX project, where the ASTRA model is employed in combination with several energy models in a comprehensive modelling system. From the transport sector perspective, some of the impacts are driven by the trend of electricity prices estimated by the energy modelling system. 
Nevertheless, the major drivers towards a low-carbon transport sector are policies related to the increased fuel efficiency of conventional drivetrain technologies, improvement of demand management (e.g. increased public transport and car sharing services/usage) and the diffusion of environmentally friendly vehicles (e.g. electric vehicles). The final modelling results of the REFLEX project will be available from October 2018. The analysis of the impacts and challenges of future scenarios is performed in terms of transport, environmental and social indicators. The diffusion of e-vehicles produces a substantial reduction in future greenhouse gas emissions, although the decarbonisation target can be achieved only with the contribution of complementary transport policies on demand management and support for the deployment of low-emission alternative energy for non-road transport modes. The paper explores the implications through time of transport policy measures on mobility and the environment, underlining to what extent they can contribute to the decarbonisation of the transport sector. Acknowledgements: The results refer to the REFLEX project, which has received grants from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 691685.
Keywords: decarbonisation, greenhouse gas emissions, e-mobility, transport policies, energy
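A toy fragment in the spirit of such a scenario calculation can make the fleet-diffusion mechanism concrete. The uptake rate, initial share, and emission factors below are invented and bear no relation to ASTRA's calibrated values:

```python
# Toy EV-substitution projection (all coefficients invented, nothing from
# ASTRA): the EV share grows by a fixed uptake of the remaining ICE stock,
# and the fleet-average emission factor falls accordingly.

def project(years, uptake=0.06, share0=0.02, ef_ice=160.0, ef_ev=40.0):
    """Return the EV share after `years` and the yearly fleet-average
    emission factor path (gCO2/km) under simple substitution."""
    share, path = share0, []
    for _ in range(years):
        share += uptake * (1 - share)                 # diffusion step
        path.append(ef_ice * (1 - share) + ef_ev * share)
    return share, path

share, path = project(10)
# The average emission factor declines monotonically as the share rises.
```

A real system-dynamics run would, as the abstract notes, feed electricity prices, demand management, and economic responses back into this loop.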
Procedia PDF Downloads 153
109 Reconceptualizing Evidence and Evidence Types for Digital Journalism Studies
Authors: Hai L. Tran
Abstract:
In the digital age, evidence-based reporting is touted as a best practice for seeking the truth and keeping the public well-informed. Journalists are expected to rely on evidence to demonstrate the validity of a factual statement and lend credence to an individual account. Evidence can be obtained from various sources, and due to a rich supply of evidence types available, the definition of this important concept varies semantically. To promote clarity and understanding, it is necessary to break down the various types of evidence and categorize them in a more coherent, systematic way. There is a wide array of devices that digital journalists deploy as proof to back up or refute a truth claim. Evidence can take various formats, including verbal and visual materials. Verbal evidence encompasses quotes, soundbites, talking heads, testimonies, voice recordings, anecdotes, and statistics communicated through written or spoken language. There are instances where evidence is simply non-verbal, such as when natural sounds are provided without any verbalized words. On the other hand, other language-free items exhibited in photos, video footage, data visualizations, infographics, and illustrations can serve as visual evidence. Moreover, there are different sources from which evidence can be cited. Supporting materials, such as public or leaked records and documents, data, research studies, surveys, polls, or reports compiled by governments, organizations, and other entities, are frequently included as informational evidence. Proof can also come from human sources via interviews, recorded conversations, public and private gatherings, or press conferences. Expert opinions, eye-witness insights, insider observations, and official statements are some of the common examples of testimonial evidence. Digital journalism studies tend to make broad references when comparing qualitative versus quantitative forms of evidence. 
Meanwhile, limited efforts have been undertaken to distinguish between sister terms, such as “data,” “statistical,” and “base-rate” on one side of the spectrum and “narrative,” “anecdotal,” and “exemplar” on the other. The present study seeks to develop an evidence taxonomy, which classifies evidence through the quantitative-qualitative juxtaposition and in a hierarchical order from broad to specific. According to this scheme, data, statistics, and base rate belong to the quantitative evidence group, whereas narrative, anecdote, and exemplar fall into the qualitative evidence group. Subsequently, the taxonomical classification arranges data versus narrative at the top of the hierarchy of types of evidence, followed by statistics versus anecdote and base rate versus exemplar. This research reiterates the central role of evidence in how journalists describe and explain social phenomena and issues. By defining the various types of evidence and delineating their logical connections, the taxonomy helps remove a significant degree of conceptual inconsistency, ambiguity, and confusion in digital journalism studies.
Keywords: evidence, evidence forms, evidence types, taxonomy
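The taxonomy lends itself to a small data-structure sketch. The pairing of same-rank sister terms below follows the scheme described in the abstract; the mapping name and the lookup function are illustrative choices, not part of the study:

```python
# The proposed evidence taxonomy as a nested mapping: the quantitative vs.
# qualitative split at the top, with same-rank sister terms aligned by index
# (data/narrative, statistics/anecdote, base rate/exemplar).

EVIDENCE_TAXONOMY = {
    "quantitative": ["data", "statistics", "base rate"],
    "qualitative": ["narrative", "anecdote", "exemplar"],
}

def counterpart(term):
    """Return the same-rank sister term on the opposite side of the spectrum."""
    for side, terms in EVIDENCE_TAXONOMY.items():
        if term in terms:
            other = "qualitative" if side == "quantitative" else "quantitative"
            return EVIDENCE_TAXONOMY[other][terms.index(term)]
    raise KeyError(term)

print(counterpart("statistics"))  # anecdote
```

Encoding the hierarchy this way makes the broad-to-specific ordering explicit: index 0 holds the broadest pair (data vs. narrative), index 2 the most specific (base rate vs. exemplar).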
Procedia PDF Downloads 67
108 Encephalon: An Implementation of a Handwritten Mathematical Expression Solver
Authors: Shreeyam, Ranjan Kumar Sah, Shivangi
Abstract:
Recognizing and solving handwritten mathematical expressions can be a challenging task, particularly the segmentation and classification of individual characters. This project proposes a solution that uses a Convolutional Neural Network (CNN) and image processing techniques to accurately solve various types of equations, including arithmetic, quadratic, and trigonometric equations, as well as logical operations like AND, OR, NOT, NAND, XOR, and NOR. The proposed solution also provides a graphical solution, allowing users to visualize equations and their solutions. In addition to equation solving, the platform, called CNNCalc, offers a comprehensive learning experience for students. It provides educational content, a quiz platform, and a coding platform for practicing programming skills in different languages like C, Python, and Java. This all-in-one solution makes the learning process engaging and enjoyable for students. The proposed methodology includes horizontal compact projection analysis and survey for segmentation and binarization, as well as connected component analysis and integrated connected component analysis for character classification. The compact projection algorithm compresses the horizontal projections to remove noise and obtain a clearer image, contributing to the accuracy of character segmentation. Experimental results demonstrate the effectiveness of the proposed solution in solving a wide range of mathematical equations. CNNCalc provides a powerful and user-friendly platform for solving equations, learning, and practicing programming skills. With its comprehensive features and accurate results, CNNCalc is poised to revolutionize the way students learn and solve mathematical equations. The platform utilizes a custom-designed CNN with image processing techniques to accurately recognize and classify symbols within handwritten equations. 
The compact projection algorithm effectively removes noise from horizontal projections, leading to clearer images and improved character segmentation. Experimental results demonstrate the accuracy and effectiveness of the proposed solution in solving a wide range of equations, including arithmetic, quadratic, trigonometric, and logical operations. CNNCalc features a user-friendly interface with a graphical representation of the equations being solved, making it an interactive and engaging learning experience for users. The platform also includes tutorials, testing capabilities, and programming features in languages such as C, Python, and Java. Users can track their progress and work towards improving their skills. CNNCalc is poised to revolutionize the way students learn and solve mathematical equations with its comprehensive features and accurate results.
Keywords: AI, ML, handwritten equation solver, mathematics, computer, CNNCalc, convolutional neural networks
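The projection-based segmentation step can be sketched as follows. The thresholding rule and the toy binary image are assumptions made for illustration, not CNNCalc's actual algorithm:

```python
# A minimal sketch of horizontal-projection line segmentation: rows whose
# projection (count of ink pixels) exceeds a noise threshold are grouped
# into text lines, "compacting" away low-count noise rows.

def segment_lines(binary_image, noise_threshold=0):
    """Return (start, end) row ranges where the horizontal projection
    exceeds the noise threshold."""
    projection = [sum(row) for row in binary_image]
    lines, start = [], None
    for i, count in enumerate(projection):
        if count > noise_threshold and start is None:
            start = i                       # a line begins
        elif count <= noise_threshold and start is not None:
            lines.append((start, i - 1))    # the line ends
            start = None
    if start is not None:
        lines.append((start, len(projection) - 1))
    return lines

img = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],   # line 1
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [0, 1, 0, 1],   # line 2
]
print(segment_lines(img))  # [(1, 2), (4, 4)]
```

The same idea applied column-wise (a vertical projection) separates characters within a segmented line before they are passed to the CNN classifier.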
Procedia PDF Downloads 122
107 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model
Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero
Abstract:
Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing lithium-ion battery parameters, such as the material particle size or the direction in which to adjust the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), such as Fick’s law of diffusion and the MacInnes and Ohm’s equations, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second-order accurate in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests. 
This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the simple Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational times under a wide range of battery operating scenarios. These include both long-term simulations for aging tests, and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electric vehicle braking and acceleration.
Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods
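The stability contrast between explicit Euler and Crank-Nicolson can be demonstrated on a toy 1D diffusion problem of the kind discussed for the electrolyte equation. The grid size, the value of r = D*dt/dx^2, and the initial profile below are arbitrary demo choices, not the paper's settings:

```python
# Explicit Euler vs. Crank-Nicolson on u_t = D u_xx with zero Dirichlet
# boundaries, in terms of r = D*dt/dx^2. With r = 1 (beyond the explicit
# stability bound r <= 0.5) the Euler solution diverges while CN decays.

def step_explicit(u, r):
    """One explicit Euler step; stable only for r <= 0.5."""
    return [0.0] + [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
                    for i in range(1, len(u) - 1)] + [0.0]

def step_crank_nicolson(u, r):
    """One Crank-Nicolson step via the Thomas algorithm; stable for any r."""
    n = len(u) - 2
    # Right-hand side: (I + r/2 * A) u^k on the interior nodes
    d = [u[i] + 0.5 * r * (u[i - 1] - 2 * u[i] + u[i + 1]) for i in range(1, n + 1)]
    a, b, c = -0.5 * r, 1.0 + r, -0.5 * r   # constant tridiagonal (I - r/2 * A)
    cp, dp = [0.0] * n, [0.0] * n           # forward sweep
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    x = [0.0] * n                           # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return [0.0] + x + [0.0]

u0 = [0.0, 0.0, 1.0, 0.0, 0.0]   # unit spike, Dirichlet boundaries
r = 1.0                           # violates the explicit stability bound
ue, uc = u0, u0
for _ in range(20):
    ue = step_explicit(ue, r)
    uc = step_crank_nicolson(uc, r)
# ue has blown up by orders of magnitude; uc has decayed toward zero.
```

This is exactly the behaviour the abstract exploits: Crank-Nicolson's step-size-independent stability is what makes long-running parameter-identification loops over the DFN model practical.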
Procedia PDF Downloads 23
106 Temporal Changes Analysis (1960-2019) of a Greek Rural Landscape
Authors: Stamatia Nasiakou, Dimitrios Chouvardas, Michael Vrahnakis, Vassiliki Kleftoyanni
Abstract:
Recent research on the mountainous and semi-mountainous rural landscapes of Greece shows that they have changed significantly over the last 80 years. These changes take the form of structural modification of land cover/use patterns, the main characteristic being the extensive expansion of dense forests and shrubs at the expense of grasslands and extensive agricultural areas. The aim of this research was to study the 60-year changes (1960-2019) in land cover/use units in the rural landscape of Mouzaki (Karditsa Prefecture, central Greece). Relevant cartographic material, such as forest land use maps, digital maps (Corine Land Cover 2018), 1960 aerial photos from the Hellenic Military Geographical Service, and satellite imagery (Google Earth Pro 2014, 2016, 2017 and 2019), was collected and processed in order to study the landscape evolution. ArcGIS v10.2.2 software was used to process the cartographic material and to produce several sets of data. The main products of the analysis were a digitized photo-mosaic of the 1960 aerial photographs, a digitized photo-mosaic of recent satellite images (2014, 2016, 2017 and 2019), and diagrams and maps of the temporal transformation of the rural landscape (1960-2019). Maps and diagrams were produced by applying photointerpretation techniques and a suitable land cover/use classification system to the two photo-mosaics. Demographic and socioeconomic inventory data were also collected, mainly from diachronic census reports of the Hellenic Statistical Authority and local sources. Data analysis of the temporal transformation of land cover/use units showed that the changes are mainly located in the central and south-eastern part of the study area, which mainly includes the mountainous part of the landscape. The most significant change is the expansion of the dense forests that currently dominate the southern and eastern parts of the landscape. 
In conclusion, the produced diagrams and maps of land cover/use evolution suggest that woody vegetation in the rural landscape of Mouzaki has increased significantly over the past 60 years at the expense of open areas, especially grasslands and agricultural areas. Demographic changes, land abandonment and the transformation of traditional farming practices (e.g. agroforestry) were recognized as the main causes of the landscape change. This study is part of a broader research project entitled “Perspective of Agroforestry in Thessaly region: A research on social, environmental and economic aspects to enhance farmer participation”. The project is funded by the General Secretariat for Research and Technology (GSRT) and the Hellenic Foundation for Research and Innovation (HFRI).
Keywords: Agroforestry, Forest expansion, Land cover/use changes, Mountainous and semi-mountainous areas
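As an illustration of the change-detection step described above, the following is a minimal sketch (hypothetical class labels, toy pixel counts, not the Mouzaki data) of how a land cover transition matrix and net per-class change can be derived from two co-registered classified photo-mosaics:

```python
from collections import Counter

def transition_matrix(classes_t0, classes_t1):
    """Cross-tabulate per-pixel land cover classes at two dates.

    Returns a mapping (class_1960, class_2019) -> pixel count, the
    standard first step when quantifying landscape change from two
    co-registered classified rasters.
    """
    assert len(classes_t0) == len(classes_t1)
    return Counter(zip(classes_t0, classes_t1))

def net_change(matrix):
    """Net gain/loss in pixels per class between the two dates."""
    gains, losses = Counter(), Counter()
    for (c0, c1), n in matrix.items():
        if c0 != c1:
            losses[c0] += n
            gains[c1] += n
    classes = set(gains) | set(losses)
    return {c: gains[c] - losses[c] for c in classes}

# Toy 6-pixel example: grassland and shrubs converting to dense forest.
t0 = ["grassland", "grassland", "agriculture", "forest", "grassland", "shrubs"]
t1 = ["forest", "forest", "agriculture", "forest", "grassland", "forest"]
m = transition_matrix(t0, t1)
change = net_change(m)  # forest gains 3 pixels in this toy example
```

In practice the two class vectors would come from the digitized 1960 and 2014-2019 photo-mosaics after rasterization to a common grid.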
Procedia PDF Downloads 108
105 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel
Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler
Abstract:
Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, owing to its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential for reducing the overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the materials and manufacturing processes by which type IV pressure vessels are made, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientation variations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high-dimensional. Each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Worth mentioning, the model of the composite overwrap is automatically generated using the Abaqus-Python scripting interface.
The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, the different composite layups are simulated as axisymmetric models to reduce the computational complexity and the calculation time. Finally, the results are evaluated and compared with regard to the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and obtain an indication of the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties with few preliminary configuration steps for further case analysis, subsequently using e.g. machine learning to identify the optimum directly from the data pool without the simulation process.
Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process
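The analytical winding-angle prediction on the dome can be sketched as follows. Assuming geodesic winding, Clairaut's relation fixes the angle from the polar opening radius; the thickness formula below is one common first-order approximation (fibre band of conserved cross-section), not necessarily the authors' exact method, and all dimensions are illustrative:

```python
import math

def geodesic_winding_angle(r, r0):
    """Clairaut's relation for a geodesic path on a dome of revolution:
    r * sin(alpha) = r0 (polar opening radius), so alpha = asin(r0 / r).
    Valid for r >= r0; the fibre becomes tangential (90 deg) at r0."""
    return math.asin(r0 / r)

def dome_thickness(r, r_cyl, t_cyl, r0):
    """A common first-order thickness estimate on the dome, assuming the
    fibre band conserves its cross-section:
    t(r) = t_cyl * (r_cyl * cos(alpha_cyl)) / (r * cos(alpha(r)))."""
    a_cyl = geodesic_winding_angle(r_cyl, r0)
    a_r = geodesic_winding_angle(r, r0)
    return t_cyl * (r_cyl * math.cos(a_cyl)) / (r * math.cos(a_r))

# Example: 200 mm cylinder radius, 30 mm polar opening, 1 mm layer.
alpha_cyl = math.degrees(geodesic_winding_angle(200.0, 30.0))  # ~8.6 deg
t_mid_dome = dome_thickness(100.0, 200.0, 1.0, 30.0)  # thicker than 1 mm
```

An automation script would evaluate such expressions per layer and per dome radius before writing the section assignments into the Abaqus model.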
Procedia PDF Downloads 135
104 Identification of Hub Genes in the Development of Atherosclerosis
Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia
Abstract:
Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger various cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined gene expression in media and neo-intima from plaques, as well as distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and 102 DEGs with lipid-related genes from the GeneCard database.
The discriminative power of the six hub genes was estimated by a robust classifier with an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics
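The AUC evaluation used to judge such a hub-gene classifier can be sketched in a few lines: the AUC equals the Mann-Whitney probability that a randomly chosen case outscores a randomly chosen control. The labels and composite scores below are toy values, not the study data:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive sample scores
    higher than a randomly chosen negative one (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: a composite "hub gene" score separating disease (1)
# from control (0) samples; values are illustrative only.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
auc = roc_auc(labels, scores)  # 0.9375 for this toy data
```

An AUC near 0.5 would indicate no discriminative power; the study's reported 0.873 sits well into the "excellent" range by the usual conventions.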
Procedia PDF Downloads 66
103 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security
Authors: D. Pugazhenthi, B. Sree Vidya
Abstract:
Cloud computing is one of the emerging technologies that enables end users to use cloud services on a ‘pay per usage’ basis. This technology is growing at a fast pace, and so are its security threats. Among the various services provided by the cloud is storage, in which security is a vital factor both for authenticating legitimate users and for protecting information. This paper presents efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. Unique identification and low intrusiveness give user-behaviour-based biometrics greater reliability than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here do not comprise a single trait but multiple ones, viz., iris and fingerprints. The coordinating stage of the authentication system is based on an ensemble Support Vector Machine (SVM), optimized by assembling the weights of the base SVMs for the ensemble after each individual SVM of the ensemble is trained by the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template by a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing recovery of the original text from the cipher text. This improves authentication performance: the proposed double cryptographic key scheme is capable of providing better user authentication and better security, distinguishing between genuine and fake users.
Thus, there are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation and 3) cryptographic key generation. The extraction of feature and texture properties from the respective fingerprint and iris images is performed first. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, the double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if it has already been stolen. The results prove that the authentication process is optimal and the stored information is secured.
Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification
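The score-fusion idea behind the ensemble stage can be sketched as a weighted combination of base classifier scores. The weights and scores below are illustrative stand-ins for what the trained base SVMs and an AFSA-style weight optimization would actually produce, and the 0.5 acceptance threshold is an assumption:

```python
def weighted_ensemble_score(base_scores, weights):
    """Fuse the decision scores of base classifiers (e.g. one SVM per
    biometric trait: iris, fingerprint) into a single score using
    per-classifier weights, normalized so weights need not sum to 1."""
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, base_scores)) / total

def authenticate(base_scores, weights, threshold=0.5):
    """Accept the user if the fused score clears the threshold."""
    return weighted_ensemble_score(base_scores, weights) >= threshold

# Illustrative scores: the iris classifier is confident (0.9), the
# fingerprint classifier less so (0.4); weights favour the iris trait.
fused = weighted_ensemble_score([0.9, 0.4], [0.7, 0.3])  # 0.75
ok = authenticate([0.9, 0.4], [0.7, 0.3])
```

In the paper's scheme the fused output would feed the key-generation step rather than a simple accept/reject decision; the sketch only shows the weighted-fusion mechanics.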
Procedia PDF Downloads 259
102 Relationships of Plasma Lipids, Lipoproteins and Cardiovascular Outcomes with Climatic Variations: A Large 8-Year Period Brazilian Study
Authors: Vanessa H. S. Zago, Ana Maria H. de Avila, Paula P. Costa, Welington Corozolla, Liriam S. Teixeira, Eliana C. de Faria
Abstract:
Objectives: The outcome of cardiovascular disease is affected by environment and climate. This study evaluated the possible relationships between climatic and environmental changes and the occurrence of biological rhythms in serum lipids and lipoproteins in a large population sample in the city of Campinas, State of Sao Paulo, Brazil. In addition, it determined the temporal variations of death due to atherosclerotic events in Campinas during the time window examined. Methods: A large 8-year retrospective study was carried out to evaluate the lipid profiles of individuals attended at the University of Campinas (Unicamp). The study population comprised 27,543 individuals of both sexes and of all ages. Normolipidemic and dyslipidemic individuals, classified according to the Brazilian guidelines on dyslipidemias, participated in the study. For the same period, the temperature, relative humidity and daily brightness records were obtained from the Centro de Pesquisas Meteorologicas e Climaticas Aplicadas a Agricultura/Unicamp, and the frequencies of death due to atherosclerotic events in Campinas were acquired from the Brazilian official database DATASUS, according to the International Classification of Diseases. Statistical analyses were performed using both Cosinor and ARIMA temporal analysis methods. For cross-correlation analysis between climatic and lipid parameters, cross-correlation functions were used. Results: Preliminary results indicated that rhythmicity was significant for LDL-C and HDL-C in both normolipidemic and dyslipidemic subjects (n = 11,892 and 15,651, respectively), both measures increasing in the winter and decreasing in the summer. On the other hand, in dyslipidemic subjects triglycerides increased in summer and decreased in winter, in contrast to normolipidemic ones, in which triglycerides did not show rhythmicity.
The number of deaths due to atherosclerotic events showed significant rhythmicity, with maximum and minimum frequencies in winter and summer, respectively. Cross-correlation analyses showed that low humidity and temperature, higher thermal amplitude and dark cycles are associated with increased levels of LDL-C and HDL-C during winter. In contrast, TG showed moderate cross-correlations with temperature and minimum humidity in an inverse way: maximum temperature and humidity increased TG during the summer. Conclusions: This study showed a coincident rhythmicity between low temperatures and high concentrations of LDL-C and HDL-C and the number of deaths due to atherosclerotic cardiovascular events in individuals from the city of Campinas. The opposite behavior of cholesterol and TG suggests different physiological mechanisms in their metabolic modulation by changes in climatic parameters. Thus, new analyses are underway to better elucidate these mechanisms, as well as the variations in lipid concentrations in relation to climatic variations and their associations with atherosclerotic disease and death outcomes in Campinas.
Keywords: atherosclerosis, climatic variations, lipids and lipoproteins, associations
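The Cosinor step fits a sinusoid of known period by least squares and summarizes it as MESOR, amplitude and acrophase. A minimal sketch of that rhythm-detection step, on synthetic monthly data rather than the Campinas series, and omitting the Cosinor significance testing:

```python
import math
import numpy as np

def cosinor_fit(t, y, period):
    """Single-component cosinor: fit y = M + b*cos(wt) + g*sin(wt) by
    ordinary least squares, then report MESOR (rhythm-adjusted mean),
    amplitude and acrophase. A minimal sketch, not the full inference."""
    w = 2.0 * math.pi / period
    t = np.asarray(t, dtype=float)
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    mesor, b, g = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)[0]
    amplitude = math.hypot(b, g)
    acrophase = math.atan2(-g, b)  # radians; 0 = peak at t = 0
    return mesor, amplitude, acrophase

# Synthetic monthly LDL-C (mg/dl) with a 12-month rhythm peaking at t=0.
months = np.arange(24)
ldl = 120 + 8 * np.cos(2 * math.pi * months / 12)
mesor, amp, phase = cosinor_fit(months, ldl, period=12)
```

The same fit applied to winter-peaking lipid series and summer-peaking TG series would return acrophases roughly half a period apart, which is the "opposite behavior" the abstract describes.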
Procedia PDF Downloads 117
101 “Self-Torturous Thresholds” in Post-WWII Japan: Three Thresholds to Queer Japanese Futures
Authors: Maari Sugawara
Abstract:
This arts-based research is about "self-torture": the interplay of seemingly opposing elements of pain, pleasure, submission, and power. It asserts that "self-torture" can be considered a nontrivial mediation between the aesthetic and the sociopolitical. It explores what the author calls queered self-torture; "self-torture" marked by an ambivalence that allows the oppressed to resist, and their counter-valorization occasionally functions as therapeutic solutions to the problems they highlight and condense. The research goal is to deconstruct normative self-torture and propose queered self-torture as a fertile ground for considering the complexities of desire that allow the oppressed to practice freedom. While “self-torture” manifests in many societies, this research focuses on cultural and national identity in post-WWII Japan using this lens of self-torture, as masochism functions as the very basis for Japanese cultural and national identity to ensure self-preservation. This masochism is defined as an impulse to realize a sense of pride and construct an identity through the acceptance of subordination, shame, and humiliation in the face of an all-powerful Other; the dominant Euro-America. It could be argued that this self-torture is a result of Japanese cultural annihilation and the trauma of the nation's defeat to the US. This is the definition of "self-torturous thresholds," the author’s post-WWII Japan psycho-historical diagnosis; when this threshold is crossed, the oppressed begin to torture themselves; the oppressors no longer need to do anything to maintain their power. The oppressed are already oppressing themselves. The term "oppressed" here refers to Japanese individuals and residents of Japan who are subjected to oppressive “white” heteropatriarchal supremacist structures and values that serve colonialist interests. 
There are three stages in "self-torturous thresholds": (1) the oppressors no longer need to oppress because the oppressed voluntarily commit to self-torture; (2) the oppressed find pleasure in self-torture; and (3) the oppressed achieve queered self-torture, to achieve alternative futures. Using the conceptualization of "self-torture," this research examines and critiques pleasure, desire, capital, and power in postwar Japan, which enables the discussion of the data-colonizing “Moonshot Research and Development program”. If the oppressed want to divest from the habits of normative self-torture, which shape what is possible in both our present and future, we need methods to feel and know that the alternative results of self-torture are possible. Phase three will be enacted using Sara Ahmed's queer methodology to reorient national and cultural identity away from heteronormativity. Through theoretical analysis, textual analysis, archival research, ethnographic interviews, and digital art projects, including experimental documentary as a method to capture the realities of the individuals who are practicing self-torture, this research seeks to reveal how self-torture may become not just a vehicle of pleasure but also a mode of critiquing power and achieving freedom. It seeks to encourage the imagining of queer Japanese futures, where the marginalized survive Japan’s natural and man-made disasters and Japan’s Imperialist past and present rather than submitting to the country’s continued violence.
Keywords: arts-based research, Japanese studies, interdisciplinary arts, queer studies, cultural studies, popular culture, BDSM, sadomasochism, sexuality, VR, AR, digital art, visual arts, speculative fiction
Procedia PDF Downloads 72
100 Investigation of Attitude of Production Workers towards Job Rotation in Automotive Industry against the Background of Demographic Change
Authors: Franciska Weise, Ralph Bruder
Abstract:
Due to the demographic change in Germany, with its declining birth rate and the increasing age of the population, the share of older people in society is rising. This development is also reflected in the work force of German companies. Therefore, companies should focus on improving ergonomics, especially in the area of age-related work design. The literature shows that studies on age-related work design have been carried out in the past, some of whose results have been put into practice. However, there is still a need for further research. One of the most important methods for taking into account the needs of an aging population is job rotation. This method aims at preventing or reducing health risks and inappropriate physical strain. It is conceived as a systematic change of workplaces within a group. The existing literature does not cover any methods for investigating the attitudes of employees towards job rotation. However, in order to evaluate job rotation, it is essential to know the views of workers on rotation. In addition to an investigation of attitudes, the design of rotation plays a crucial role. The sequence of activities and the rotation frequency influence the worker as well as the work result. The evaluation of preliminary talks on the shop floor showed that team speakers and foremen share a common understanding of job rotation. In practice, different varieties of job rotation exist. One important aspect is the frequency of rotation: workers may never rotate, rotate once or several times per shift, rotate at every break, or even more often than at every break. Whether this happens depends on whether workers have the opportunity to rotate when they want to. Some challenges can be derived from the preliminary talks; for example, rotation across the whole team is not possible if a team member still requires training for a new task.
In order to determine the relation between the design of job rotation and the attitude towards it, a questionnaire survey was carried out in vehicle manufacturing. The questionnaire is employed to determine the different varieties of job rotation that exist in production, as well as the attitudes of workers towards those different frequencies of job rotation. In addition, younger and older employees, divided into three age groups, are compared with regard to their rotation frequency and their attitudes towards rotation. Three questions are under examination. The first question is whether older employees rotate less frequently than younger employees. The second is whether the frequency of job rotation and the attitude towards that frequency are interconnected. The third concerns how the attitudes of the different age groups towards the frequency of rotation differ. Up to now, 144 employees, all working in production, have taken part in the survey: 36.8% were younger than thirty, 37.5% were between thirty and forty-four, and 25.7% were above forty-five years old. The data shows no difference between the three age groups in relation to the frequency of job rotation (N=139, median=4, Chi²=.859, df=2, p=.651). Most employees rotate between six and seven workplaces per day. In addition, there is a statistically significant correlation between the frequency of job rotation and the attitude towards that frequency (Spearman's rho: 2-sided p=.008, correlation coefficient=.223). Fewer than four workplaces per day are not enough for the employees. The third question, concerning the differences between older and younger people who rotate in different ways and with different attitudes towards job rotation, cannot yet be answered. So far, the data shows that younger people would like to rotate very often, whereas for older people no correlation can be found with acceptable significance.
The results of the survey will be used to improve the current practice of job rotation. In addition, the discussions during the survey are expected to help sensitize the employees with respect to rotation issues, and to contribute to optimizing rotation by means of qualification and an improved design of job rotation. Together with the employees and on the basis of the survey results, standards must be developed that show how to rotate in an ergonomic way while considering the attitudes towards job rotation.
Keywords: job rotation, age-related work design, questionnaire, automotive industry
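The reported Spearman correlation between rotation frequency and attitude can be sketched as the Pearson correlation of rank vectors (with average ranks for ties). The frequency counts and Likert attitude scores below are illustrative only, not the survey data:

```python
def rank(values):
    """Average ranks (1-based), handling ties as Spearman requires."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied rank positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative data: workplaces rotated per day vs. attitude (1-5 Likert).
freq = [2, 4, 6, 7, 3, 5]
attitude = [2, 3, 4, 5, 2, 4]
rho = spearman_rho(freq, attitude)
```

In practice one would also compute the two-sided p-value (the survey reports p = .008 for rho = .223 with n = 139), which this sketch omits.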
Procedia PDF Downloads 303
99 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling
Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci
Abstract:
Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and use changes resulting from the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, air quality models are very useful for simulating the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, the modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, up-to-date LC is an important parameter to be considered in atmospheric models, since it takes into account the Earth’s surface changes due to natural and anthropic actions and regulates the exchange of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km, 5 km and 1 km horizontal resolution).
Based on the 33-class LC approach, particular emphasis was placed on Portugal, given the detail and higher LC spatial resolution (100 m x 100 m) compared to the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, a better model performance was achieved at rural stations: moderate correlation (0.4-0.7), BIAS (10-21 µg.m-3) and RMSE (20-30 µg.m-3), where higher average ozone concentrations were estimated. Comparing both simulations, small differences grounded in the Leaf Area Index and air temperature values were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.
Keywords: land use, spatial resolution, WRF-Chem, air quality assessment
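The validation metrics quoted above (correlation, BIAS, RMSE) can be sketched as follows; the hourly ozone concentrations are invented for illustration and are not from the study:

```python
import math

def evaluate(modelled, observed):
    """BIAS, RMSE and Pearson correlation between modelled and observed
    concentrations -- the standard metrics for validating simulations
    against monitoring-station measurements."""
    n = len(observed)
    bias = sum(m - o for m, o in zip(modelled, observed)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(modelled, observed)) / n)
    mm = sum(modelled) / n
    mo = sum(observed) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(modelled, observed))
    sm = math.sqrt(sum((m - mm) ** 2 for m in modelled))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    r = cov / (sm * so)
    return bias, rmse, r

# Illustrative hourly ozone (ug/m3) at one background station.
obs = [60.0, 75.0, 90.0, 80.0, 65.0]
mod = [70.0, 85.0, 95.0, 90.0, 80.0]
bias, rmse, r = evaluate(mod, obs)  # positive bias: model overestimates
```

In the study these statistics would be aggregated by season and by station typology (urban, suburban, rural) before comparing the two LC configurations.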
Procedia PDF Downloads 158
98 Double Liposomes Based Dual Drug Delivery System for Effective Eradication of Helicobacter pylori
Authors: Yuvraj Singh Dangi, Brajesh Kumar Tiwari, Ashok Kumar Jain, Kamta Prasad Namdeo
Abstract:
The potential use of liposomes as drug carriers by i.v. injection is limited by their low stability in the bloodstream: phospholipid exchange and transfer to lipoproteins, mainly HDL, destabilize and disintegrate liposomes with subsequent loss of content. To avoid the pain associated with injection and to obtain better patient compliance, various other dosage forms have been studied. Certain drawbacks of conventional liposomes (unilamellar and multilamellar), such as low entrapment efficiency, poor stability, and release of the drug after a single breach in the external membrane, have led to a new type of liposomal system. The challenge has been successfully met in the form of Double Liposomes (DL). DL is a recently developed type of liposome consisting of smaller liposomes enveloped in lipid bilayers. The outer lipid layer of DL can protect the inner liposomes against various enzymes; therefore, DL was thought to be more effective than ordinary liposomes. This concept is also supported by in vitro release characteristics, i.e., DL formation inhibited the release of drugs encapsulated in the inner liposomes. DL consists of several small liposomes encapsulated in large liposomes, i.e., multivesicular vesicles (MVV); therefore, DL should be distinguished from the ordinary classification of multilamellar vesicles (MLV), large unilamellar vesicles (LUV) and small unilamellar vesicles (SUV). For these liposomes, however, the volume of the inner phase is small and the loading volume of water-soluble drugs is low. In the present study, the potential of phosphatidylethanolamine (PE) lipid-anchored double liposomes (DL) to incorporate two drugs in a single system is exploited as a tool to augment the H. pylori eradication rate. Preparation of DL involves two steps: first, formation of primary (inner) liposomes containing one drug by the thin film hydration method, then addition of the suspension of inner liposomes onto a thin film of lipid containing the other drug.
Successful formation of DL was confirmed by optical and transmission electron microscopy. The DL-bacterial interaction was quantified in terms of percent growth inhibition (%GI) on the reference strain H. pylori ATCC 26695. To confirm the specific binding efficacy of DL to the H. pylori PE surface receptor, we performed an agglutination assay. Agglutination in the DL-treated H. pylori suspension suggested selectivity of DL towards the PE surface receptor of H. pylori. Monotherapy is generally not recommended for the treatment of H. pylori infection due to the danger of development of resistance and unacceptably low eradication rates. Therefore, combination therapy with amoxicillin trihydrate (AMOX) as the anti-H. pylori agent and ranitidine bismuth citrate (RBC) as the antisecretory agent was selected for the study, with the expectation that this dual-drug delivery approach would exert acceptable anti-H. pylori activity.
Keywords: Helicobacter pylori, amoxicillin trihydrate, ranitidine bismuth citrate, phosphatidylethanolamine, multivesicular systems
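Percent growth inhibition is commonly defined as the relative reduction in viable counts versus an untreated control; the abstract does not state its exact formula, so the sketch below works under that common-definition assumption, with illustrative counts rather than study data:

```python
def percent_growth_inhibition(cfu_treated, cfu_control):
    """A common definition of %GI (assumed here, not stated in the
    abstract): the relative reduction in viable counts of the treated
    culture versus the untreated control, as a percentage."""
    return (1.0 - cfu_treated / cfu_control) * 100.0

# Illustrative viable counts (c.f.u./ml) after incubation with DL.
gi = percent_growth_inhibition(2.0e5, 8.0e5)  # 75.0% inhibition
```

A %GI of 0 would mean the DL had no effect on growth, and 100 would mean complete inhibition of the H. pylori culture.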
Procedia PDF Downloads 207
97 Clinical Presentation and Immune Response to Intramammary Infection of Holstein-Friesian Heifers with Isolates from Two Staphylococcus aureus Lineages
Authors: Dagmara A. Niedziela, Mark P. Murphy, Orla M. Keane, Finola C. Leonard
Abstract:
Staphylococcus aureus is the most frequent cause of clinical and subclinical bovine mastitis in Ireland. Mastitis caused by S. aureus is often chronic and tends to recur after antibiotic treatment. This may be due to several virulence factors, including attributes that enable the bacterium to internalize into bovine mammary epithelial cells, where it may evade antibiotic treatment, or to evade the host immune response. Four bovine-adapted lineages (CC71, CC97, CC151 and ST136) were identified among a collection of Irish S. aureus mastitis isolates. Genotypic variation of mastitis-causing strains may contribute to different presentations of the disease, including differences in milk somatic cell count (SCC), the main method of mastitis detection. The objective of this study was to investigate the influence of bacterial strain and lineage on the host immune response, by employing cell culture methods in vitro as well as an in vivo infection model. Twelve bovine-adapted S. aureus strains were examined for internalization into bovine mammary epithelial cells (bMEC) and their ability to induce an immune response from bMEC (using qPCR and ELISA). The in vitro studies found differences in a variety of virulence traits between the lineages. Strains from lineages CC97 and CC71 internalized more efficiently into bMEC than those from CC151 and ST136. CC97 strains also induced immune genes in bMEC more strongly than strains from the other three lineages. One strain each of CC151 and CC97, which differed in their ability to elicit an immune response in bMEC, were selected on the basis of the above in vitro experiments. Fourteen first-lactation Holstein-Friesian cows were purchased from two farms on the basis of low SCC (less than 50,000 cells/ml) and infection-free status. Seven cows were infected with 1.73 × 10² c.f.u. of the CC97 strain (Group 1) and another seven with 5.83 × 10² c.f.u. of the CC151 strain (Group 2).
The contralateral quarter of each cow was inoculated with PBS (vehicle). Clinical signs of infection (temperature, milk and udder appearance, milk yield) were monitored for 30 days. Blood and milk samples were taken to determine bacterial counts in milk, SCC, white blood cell populations and cytokines. Differences in disease presentation in vivo between groups were observed: two animals from Group 2 developed clinical mastitis and required antibiotic treatment, while one animal from Group 1 did not develop an infection for the duration of the study. Fever (temperature > 39.5 °C) was observed in three animals from Group 2 and in none from Group 1. Significant differences in SCC and bacterial load between groups were observed in the initial stage of infection (week 1). Data are also being collected on cytokines and chemokines secreted during the course of infection. The results of this study suggest that a strain from lineage CC151 may cause more severe clinical mastitis, while a strain from lineage CC97 may cause mild, subclinical mastitis. Diversity between strains of S. aureus may therefore influence the clinical presentation of mastitis, which in turn may influence disease detection and treatment needs.
Keywords: bovine mastitis, host immune response, host-pathogen interactions, Staphylococcus aureus
Procedia PDF Downloads 15796 Factors Affecting Early Antibiotic Delivery in Open Tibial Shaft Fractures
Authors: William Elnemer, Nauman Hussain, Samir Al-Ali, Henry Shu, Diane Ghanem, Babar Shafiq
Abstract:
Introduction: The incidence of infection in open tibial shaft injuries varies depending on the severity of the injury, with rates ranging from 1.8% for Gustilo-Anderson type I to 42.9% for type IIIB fractures. The timely administration of antibiotics upon presentation to the emergency department (ED) is an essential component of fracture management, and evidence indicates that prompt delivery of antibiotics is associated with improved outcomes. The objective of this study is to identify factors that contribute to the expedient administration of antibiotics. Methods: This is a retrospective study of open tibial shaft fractures at an academic Level I trauma center. Current Procedural Terminology (CPT) codes identified all patients treated for open tibial shaft fractures between 2015 and 2021. Open fractures were identified by reviewing ED and provider notes, and ballistic fractures were considered open. Chart reviews were performed to extract demographics, fracture characteristics, postoperative outcomes, time to operating room, and time to antibiotic order and delivery. Univariate statistical analysis compared patients who received early antibiotics (EA), delivered within one hour of ED presentation, with those who received late antibiotics (LA), delivered more than one hour after ED presentation. A multivariate analysis was performed to investigate patient, fracture, and transport/ED characteristics contributing to faster delivery of antibiotics. The multivariate analysis included the following predictors: ballistic fracture, activation of Delta Trauma, Gustilo-Anderson classification (Type III vs. Types I and II), AO-OTA classification (Type C vs. Types A and B), arrival between 7 am and 11 pm, and arrival via Emergency Medical Services (EMS) or walk-in. Results: Seventy ED patients with open tibial shaft fractures were identified. Of these, 39 patients (55.7%) received EA, while 31 patients (44.3%) received LA. 
Univariate analysis shows that arrival via EMS as opposed to walk-in (97.4% vs. 74.2%, p = 0.01) and activation of Delta Trauma (89.7% vs. 51.6%, p < 0.001) were significantly more frequent in the EA group than in the LA group. Additionally, EA cases had significantly shorter intervals between the antibiotic order and delivery when compared to LA cases (0.02 hours vs. 0.35 hours, p = 0.007). No other significant differences were found in terms of postoperative outcomes or fracture characteristics. Multivariate analysis shows that a Delta Trauma response, arrival via EMS, and presentation between 7 am and 11 pm were independent predictors of a shorter time to antibiotic administration (odds ratio = 11.9, 30.7, and 5.4; p = 0.001, 0.016, and 0.013, respectively). Discussion: Earlier antibiotic delivery is associated with arrival to the ED between 7 am and 11 pm, arrival via EMS, and a coordinated Delta Trauma activation. Our findings indicate that in cases where administering antibiotics is critical to achieving positive outcomes, it is advisable to employ a coordinated Delta Trauma response. Hospital personnel should be attentive to the rapid administration of antibiotics to patients with open fractures who arrive via walk-in or during late-night hours. Keywords: antibiotics, emergency department, fracture management, open tibial shaft fractures, orthopaedic surgery, time to OR, trauma fractures
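The univariate comparison above can be illustrated with a small odds-ratio calculation. This is a hedged sketch: the 2x2 cell counts are reconstructed from the reported percentages and group sizes (97.4% of 39 EA patients and 74.2% of 31 LA patients arrived via EMS), not taken from the study's raw data, and this unadjusted odds ratio is expected to differ from the adjusted multivariate estimate of 30.7.

```python
def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio for a 2x2 table:
    a = exposed in group 1, b = unexposed in group 1,
    c = exposed in group 2, d = unexposed in group 2."""
    return (a / b) / (c / d)

# Reconstructed (illustrative) counts:
# EA group: 38 EMS arrivals vs. 1 walk-in (97.4% of 39)
# LA group: 23 EMS arrivals vs. 8 walk-ins (74.2% of 31)
or_ems = odds_ratio(38, 1, 23, 8)
print(round(or_ems, 2))  # 13.22
```

The gap between this crude estimate and the reported adjusted odds ratio is the usual effect of controlling for covariates such as trauma activation and arrival time.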
Procedia PDF Downloads 6595 Diagnosis, Treatment, and Prognosis in Cutaneous Anaplastic Lymphoma Kinase-Positive Anaplastic Large Cell Lymphoma: A Narrative Review Apropos of a Case
Authors: Laura Gleason, Sahithi Talasila, Lauren Banner, Ladan Afifi, Neda Nikbakht
Abstract:
Primary cutaneous anaplastic large cell lymphoma (pcALCL) accounts for 9% of all cutaneous T-cell lymphomas. pcALCL is classically characterized as a solitary papulonodule that often enlarges, ulcerates, and can be locally destructive, but overall exhibits an indolent course, with 5-year survival estimated to be 90%. Distinguishing pcALCL from systemic ALCL (sALCL) is essential, as sALCL confers a poorer prognosis, with average 5-year survival being 40-50%. Although extremely rare, there have been several cases of ALK-positive ALCL diagnosed on skin biopsy without evidence of systemic involvement, which poses several challenges in the classification, prognostication, treatment, and follow-up of these patients. Objectives: We present a case of cutaneous ALK-positive ALCL without evidence of systemic involvement, together with a narrative review of the literature to further support the view that ALK-positive ALCL limited to the skin is a distinct variant with a unique presentation, history, and prognosis. A 30-year-old woman presented for evaluation of an erythematous-violaceous papule present on her right chest for two months. With the development of multifocal disease and persistent lymphadenopathy, a bone marrow biopsy and lymph node excisional biopsy were performed to assess for systemic disease. Both biopsies were unrevealing. The patient was counseled on pursuing systemic therapy consisting of brentuximab, cyclophosphamide, doxorubicin, and prednisone, given the concern for sALCL. Apropos of the patient, we searched for clinically evident cutaneous ALK-positive ALCL cases, with and without systemic involvement, in the English literature. Risk factors, such as tumor location, number, size, ALK localization, ALK translocations, and recurrence, were evaluated in cases of cutaneous ALK-positive ALCL. The majority of patients with cutaneous ALK-positive ALCL did not progress to systemic disease. 
The majority of adult cases that progressed to systemic disease had recurring skin lesions and cytoplasmic localization of ALK. ALK translocations did not influence disease progression. Mean time to disease progression was 16.7 months, and significant mortality (50%) was observed in those cases that progressed to systemic disease. Pediatric cases did not exhibit a trend similar to adult cases. In both the adult and pediatric cases, a subset of cutaneous-limited ALK-positive ALCL were treated with chemotherapy. None of the cases treated with chemotherapy progressed to systemic disease. Apropos of an ALK-positive ALCL patient with clinically cutaneous-limited disease in the histologic presence of systemic markers, we discussed the literature data, highlighting the crucial issues related to developing a clinical strategy for approaching this rare subtype of ALCL. Physicians need to be aware of the overall spectrum of ALCL, including cutaneous-limited disease, systemic disease, disease with NPM-ALK translocation, disease with ALK and EMA positivity, and disease with skin recurrence. Keywords: anaplastic large cell lymphoma, systemic, cutaneous, anaplastic lymphoma kinase, ALK, ALCL, sALCL, pcALCL, cALCL
Procedia PDF Downloads 8394 Navigating the Future: Evaluating the Market Potential and Drivers for High-Definition Mapping in the Autonomous Vehicle Era
Authors: Loha Hashimy, Isabella Castillo
Abstract:
In today's rapidly evolving technological landscape, the importance of precise navigation and mapping systems cannot be overstated. As various sectors undergo transformative changes, the market potential for Advanced Mapping and Management Systems (AMMS) emerges as a critical focus area. The Galileo/GNSS-Based Autonomous Mobile Mapping System (GAMMS) project, specifically targeted toward high-definition mapping (HDM), endeavours to provide insights into this market within the broader context of the geomatics and navigation fields. With the growing integration of Autonomous Vehicles (AVs) into our transportation systems, the relevance and demand for sophisticated mapping solutions like HDM have become increasingly pertinent. The research employed a meticulous, lean, stepwise, and interconnected methodology to ensure a comprehensive assessment. Beginning with the identification of pivotal project results, the study progressed into a systematic market screening. This was complemented by an exhaustive desk research phase that delved into existing literature, data, and trends. To ensure the holistic validity of the findings, extensive consultations were conducted. Academia and industry experts provided invaluable insights through interviews, questionnaires, and surveys. This multi-faceted approach facilitated a layered analysis, juxtaposing secondary data with primary inputs, ensuring that the conclusions were both accurate and actionable. Our investigation unearthed a plethora of drivers steering the HD maps landscape. These ranged from technological leaps, nuanced market demands, and influential economic factors to overarching socio-political shifts. The meteoric rise of Autonomous Vehicles (AVs) and the shift towards app-based transportation solutions, such as Uber, stood out as significant market pull factors. 
A nuanced PESTEL analysis further enriched our understanding, shedding light on political, economic, social, technological, environmental, and legal facets influencing the HD maps market trajectory. Simultaneously, potential roadblocks were identified. Notable among these were barriers related to high initial costs, concerns around data quality, and the challenges posed by a fragmented and evolving regulatory landscape. The GAMMS project serves as a beacon, illuminating the vast opportunities that lie ahead for the HD mapping sector. It underscores the indispensable role of HDM in enhancing navigation, ensuring safety, and providing pinpoint, accurate location services. As our world becomes more interconnected and reliant on technology, HD maps emerge as a linchpin, bridging gaps and enabling seamless experiences. The research findings accentuate the imperative for stakeholders across industries to recognize and harness the potential of HD mapping, especially as we stand on the cusp of a transportation revolution heralded by Autonomous Vehicles and advanced geomatic solutions. Keywords: high-definition mapping (HDM), autonomous vehicles, PESTEL analysis, market drivers
Procedia PDF Downloads 8493 Scenarios of Digitalization and Energy Efficiency in the Building Sector in Brazil: 2050 Horizon
Authors: Maria Fatima Almeida, Rodrigo Calili, George Soares, João Krause, Myrthes Marcele Dos Santos, Anna Carolina Suzano E. Silva, Marcos Alexandre Da
Abstract:
In Brazil, the building sector accounts for 1/6 of energy consumption and 50% of electricity consumption. A complex sector with several driving actors, it plays an essential role in the country's economy. Currently, digitalization readiness in this sector is still low, mainly due to the high investment costs and the difficulty of estimating the benefits of digital technologies in buildings. Nevertheless, the potential contribution of digitalization to increasing energy efficiency in the building sector in Brazil has been pointed out as relevant in the political and sectoral contexts, in both the medium- and long-term horizons. To contribute to the debate on the possible evolving trajectories of digitalization in the building sector in Brazil, and to inform the formulation or revision of current public policies and managerial decisions, three future scenarios were created to anticipate the potential energy efficiency in the building sector in Brazil due to digitalization by 2050. This work aims to present these scenarios as a basis for foreseeing the potential energy efficiency in this sector, according to different digitalization paces - slow, moderate, or fast - in the 2050 horizon. A methodological approach was proposed to create alternative prospective scenarios, combining the Global Business Network (GBN) and the Laboratory for Investigation in Prospective Strategy and Organisation (LIPSOR) methods. 
This approach consists of seven steps: (i) definition of the question to be foresighted and the time horizon to be considered (2050); (ii) definition and classification of a set of key variables, using prospective structural analysis; (iii) identification of the main actors with an active role in the digital and energy spheres; (iv) characterization of the current situation (2021) and identification of the main uncertainties considered critical for the development of alternative future scenarios; (v) scanning of possible futures using morphological analysis; (vi) selection and description of the most likely scenarios; (vii) foresighting of the potential energy efficiency in each of the three scenarios, namely slow digitalization, moderate digitalization, and fast digitalization. Each scenario begins with a core logic and then encompasses potentially related elements, including potential energy efficiency. The first scenario refers to digitalization at a slow pace, with induction by the government limited to public buildings. In the second scenario, digitalization is implemented at a moderate pace, induced by the government in public, commercial, and service buildings, through regulation integrating digitalization and energy efficiency mechanisms. Finally, in the third scenario, digitalization in the building sector is implemented at a fast pace in the country and is strongly induced by the government, but with broad participation of private investments and accelerated adoption of digital technologies. As a result of the slow pace of digitalization in the sector, the potential for energy efficiency stands at levels below 10% of the total of 161 TWh by 2050. In the moderate digitalization scenario, the potential reaches 20 to 30% of the total 161 TWh by 2050. 
Furthermore, in the rapid digitalization scenario, it will reach 30 to 40% of the total 161 TWh by 2050. Keywords: building digitalization, energy efficiency, scenario building, prospective structural analysis, morphological analysis
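The scenario shares can be converted to absolute savings with simple arithmetic. A minimal sketch: the 161 TWh total is taken from the abstract, and the band endpoints correspond to the three scenarios; the dictionary structure is illustrative, not from the study.

```python
TOTAL_TWH = 161  # projected building-sector consumption in 2050 (from the abstract)

# Efficiency-potential bands per scenario, as fractions of the total.
scenarios = {
    "slow": (0.0, 0.10),
    "moderate": (0.20, 0.30),
    "fast": (0.30, 0.40),
}

# Convert each fractional band to an absolute range in TWh.
savings_twh = {
    name: (lo * TOTAL_TWH, hi * TOTAL_TWH) for name, (lo, hi) in scenarios.items()
}
print(savings_twh["fast"])  # fast-scenario band, roughly 48.3 to 64.4 TWh
```

So the fast scenario implies roughly three to four times the absolute savings of the slow scenario's upper bound (about 16 TWh).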
Procedia PDF Downloads 11592 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the time at which each selected appliance changes state. In order to fit with the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is numerical software that uses behaviour simulation of the people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. Also, it facilitates the extraction of specific features used for general appliance modeling. 
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector of the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. This method is evaluated using both simulated data from LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum). Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
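The DTW step named above can be sketched in a few lines. This is the generic textbook dynamic-programming formulation with an absolute-difference local cost, not the authors' code; it returns the cumulative alignment cost between two power-demand sequences, which can then serve as a distance for unsupervised matching of appliance signatures.

```python
def dtw_distance(x, y):
    """Cumulative DTW alignment cost between sequences x and y,
    using |a - b| as the local cost (classic dynamic program)."""
    n, m = len(x), len(y)
    inf = float("inf")
    # D[i][j] = best cost of aligning x[:i] with y[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # step in x only
                                 D[i][j - 1],      # step in y only
                                 D[i - 1][j - 1])  # step in both
    return D[n][m]

# Identical step-shaped power profiles align at zero cost.
print(dtw_distance([0, 0, 5, 5], [0, 0, 5, 5]))  # 0.0
```

Because DTW tolerates shifts and stretches along the time axis, it suits 1/60 Hz data where the same appliance cycle can appear with slightly different timing.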
Procedia PDF Downloads 7891 Interval Functional Electrical Stimulation Cycling and Nutritional Counseling Improves Lean Mass to Fat Mass Ratio and Decreases Cardiometabolic Disease Risk in Individuals with Spinal Cord Injury
Authors: David Dolbow, Daniel Credeur, Mujtaba Rahimi, Dobrivoje Stokic, Jennifer Lemacks, Andrew Courtner
Abstract:
Introduction: Obesity is at epidemic proportions in the spinal cord injury (SCI) population (66-75%), as individuals who suffer from paralysis undergo a dramatic decrease in muscle mass and a dramatic increase in adipose deposition. Obesity is a major public health concern and carries a doubling of the risk of heart disease, stroke and type II diabetes mellitus. It has been demonstrated that physical activity, and especially high-intensity interval training (HIIT), can promote a healthy body composition and decrease the risk of cardiometabolic disease in the able-bodied population. However, SCI typically limits voluntary exercise to the arms, and the high prevalence of shoulder pain in persons with chronic SCI (60-90%) can make increased arm exercise problematic. Functional electrical stimulation (FES) cycling has proven to be a safe and effective way to exercise paralyzed leg muscles in clinical and home settings, sparing the often overworked arms. Yet, HIIT-FES cycling had not been investigated prior to the current study. The purpose of this study was to investigate the body composition changes with combined HIIT-FES cycling and nutritional counseling in individuals with SCI. Design: A matched (level of injury, time since injury, body mass index) and controlled trial. Setting: University exercise performance laboratory. Subjects: Ten individuals with chronic SCI (C5-T9), ASIA impairment classification (A & B), were divided into the treatment group (n=5), which received 30 minutes of HIIT-FES cycling 3 times per week for 8 weeks plus nutritional counseling over the phone for 30 minutes once per week for 8 weeks, and the control group (n=5), which received nutritional counseling only. Results: There was a statistically significant difference between the HIIT-FES group and the control group in mean body fat percentage change (-1.14 vs. +0.24, respectively; p = 0.030). 
There was also a statistically significant difference between the HIIT-FES and control groups in mean change in leg lean mass (+0.78 kg vs. -1.5 kg, respectively; p = 0.004). There was a nominal decrease in weight, BMI and total fat mass and a nominal increase in total lean mass for the HIIT-FES group over the control group. However, these changes were not found to be statistically significant. Additionally, there was a nominal decrease in mean blood glucose levels for both groups, 101.8 to 97.8 mg/dl for the HIIT-FES group and 94.6 to 93 mg/dl for the nutrition-only group; however, neither was found to be statistically significant. Conclusion: HIIT-FES cycling combined with nutritional counseling can provide healthful body composition changes, including decreased body fat percentage, in just 8 weeks. Future study recommendations include a greater number of participants, a primer electrical stimulation exercise program to better ready participants for HIIT-FES cycling, and a greater volume of training above 30 minutes, 3 times per week, for 8 weeks. Keywords: body composition, functional electrical stimulation cycling, high-intensity interval training, spinal cord injury
Procedia PDF Downloads 11690 Analysis of Taxonomic Compositions, Metabolic Pathways and Antibiotic Resistance Genes in Fish Gut Microbiome by Shotgun Metagenomics
Authors: Anuj Tyagi, Balwinder Singh, Naveen Kumar B. T., Niraj K. Singh
Abstract:
Characterization of diverse microbial communities in a specific environment plays a crucial role in better understanding their functional relationship with the ecosystem. It is now well established that the gut microbiome of fish is not a simple replication of the microbiota of the surrounding local habitat, and extensive species, dietary, physiological and metabolic variations in fishes may have a significant impact on its composition. Moreover, overuse of antibiotics in human, veterinary and aquaculture medicine has led to the rapid emergence and propagation of antibiotic resistance genes (ARGs) in the aquatic environment. Microbial communities harboring specific ARGs not only gain a preferential edge during selective antibiotic exposure but also pose a significant risk of ARG transfer to other non-resistant bacteria within confined environments. This phenomenon may lead to the emergence of habitat-specific microbial resistomes and the subsequent emergence of virulent antibiotic-resistant pathogens, with severe consequences for fish and consumer health. In this study, the gut microbiota of a freshwater carp (Labeo rohita) was investigated by shotgun metagenomics to understand its taxonomic composition and functional capabilities. Metagenomic DNA, extracted from the fish gut, was subjected to sequencing on an Illumina NextSeq to generate paired-end (PE) 2 × 150 bp sequencing reads. After QC of the raw sequencing data with Trimmomatic, taxonomic analysis with the Kraken2 taxonomic sequence classification system revealed the presence of 36 phyla, 326 families and 985 genera in the fish gut microbiome. At the phylum level, Proteobacteria accounted for more than three-fourths of the total bacterial populations, followed by Actinobacteria (14%) and Cyanobacteria (3%). Commonly used probiotic bacteria (Bacillus, Lactobacillus, Streptococcus, and Lactococcus) were found to be much less prevalent in the fish gut. 
After sequencing data assembly with the MEGAHIT v1.1.2 assembler and the PROKKA automated annotation pipeline, pathway analysis revealed the presence of 1,608 MetaCyc pathways in the fish gut microbiome. Biosynthesis pathways were found to be the most dominant (51%), followed by degradation (39%), energy metabolism (4%) and fermentation (2%). Almost one-third (33%) of the biosynthesis pathways were involved in the synthesis of secondary metabolites. Metabolic pathways for the biosynthesis of 35 antibiotic types were also present, and these accounted for 5% of the overall metabolic pathways in the fish gut microbiome. Fifty-one different types of antibiotic resistance genes (ARGs) belonging to 15 antimicrobial resistance (AMR) gene families and conferring resistance against 24 antibiotic types were detected in the fish gut. More than 90% of the ARGs in the fish gut microbiome were against beta-lactams (penicillins, cephalosporins, penems, and monobactams). Resistance against tetracyclines, macrolides, fluoroquinolones, and phenicols ranged from 0.7% to 1.3%. Some of the ARGs for multi-drug resistance were also found to be located on sequences of plasmid origin. The presence of pathogenic bacteria and ARGs on plasmid sequences suggests a potential risk of horizontal gene transfer in the confined gut environment. Keywords: antibiotic resistance, fish gut, metabolic pathways, microbial diversity
Procedia PDF Downloads 14489 Efficient Utilization of Negative Half Wave of Regulator Rectifier Output to Drive Class D LED Headlamp
Authors: Lalit Ahuja, Nancy Das, Yashas Shetty
Abstract:
LED lighting has been increasingly adopted for vehicles in both domestic and foreign automotive markets. This miniaturized technology gives the best light output with low energy consumption, and cost-efficient driving solutions are the need of the hour. In this paper, we present a methodology for driving the highest-class two-wheeler headlamp with regulator and rectifier (RR) output. Unlike usual LED headlamps, which are battery-driven, the proposed low-cost and highly efficient LED Driver Module (LDM) is driven directly by the RR output. The positive half of the magneto output is regulated and used to charge the battery that supplies various peripherals. Conventionally, the negative half was used for operating bulb-based exterior lamps, but with the advancement of battery-driven LED-based headlamps, this negative half pulse has remained unused in most vehicles. Our system uses the negative half-wave rectified DC output from the RR to provide constant light output at all RPMs of the vehicle. With the negative rectified DC output of the RR, we have the advantage of a pulsating DC input which periodically goes to zero, helping us to generate a constant DC output matched to the required LED load; with a change in RPM, an additional active thermal bypass circuit helps us maintain efficiency and limit thermal rise. The methodology uses the negative half-wave output of the RR along with a linear constant-current driver with significantly higher efficiency. Although the RR output has varied frequency and duty cycles at different engine RPMs, the driver is designed such that it provides constant current to the LEDs with minimal ripple. In LED headlamps, a DC-DC switching regulator is usually used, which tends to be bulky. With linear regulators, we eliminate bulky components and improve the form factor. Hence, this solution is both cost-efficient and compact. 
Presently, output ripple-free amplitude drivers with fewer components and less complexity are limited to lower-power LED lamps, while the focus of current high-efficiency research is often on high-LED-power applications. This paper presents a method of driving the LED load at both high beam and low beam using the negative half-wave rectified pulsating DC from the RR with minimum components, maintaining high efficiency within the thermal limitations. Linear regulators are typically quite inefficient, with efficiencies of about 40% and as low as 14%, which leads to poor thermal performance. Although they do not require complex and bulky circuitry, powering high-power devices with them is difficult to realise. But with the input being negative half-wave rectified pulsating DC, this efficiency can be improved, as it helps us generate a constant DC output matched to the LED load, minimising the voltage drop across the linear regulator. Hence, losses are significantly reduced, and efficiency as high as 75% is achieved. With a change in RPM, the DC voltage increases, which can be managed by the active thermal bypass circuitry, resulting in better thermal performance. Hence, the use of bulky and expensive heat sinks can be avoided. The methodology thus utilizes the unused negative pulsating DC output of the RR to optimize the use of RR output power and provides a cost-efficient alternative to costly DC-DC drivers. Keywords: class D LED headlamp, regulator and rectifier, pulsating DC, low cost and highly efficient, LED driver module
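The efficiency figures above follow from the basic linear-regulator relation: an ideal linear constant-current driver dissipates the headroom voltage, so its power efficiency is roughly the LED stack voltage divided by the input voltage. A sketch with hypothetical voltages (the 9 V stack and 12 V rail below are illustrative values, not figures from the paper):

```python
def linear_driver_efficiency(v_led, v_in):
    """Ideal efficiency of a linear constant-current LED driver:
    the pass element drops (v_in - v_led) at the LED current,
    so the fraction of input power reaching the LEDs is v_led / v_in."""
    if v_in <= 0 or v_led <= 0 or v_led > v_in:
        raise ValueError("need 0 < v_led <= v_in")
    return v_led / v_in

# Hypothetical example: a 9 V LED stack fed from a 12 V rectified rail.
print(linear_driver_efficiency(9.0, 12.0))  # 0.75
```

This also shows why minimising the headroom (keeping the rectified input close to the LED load voltage, as the pulsating negative half-wave scheme aims to do) raises efficiency toward the 75% the abstract reports, whereas a large drop pushes it toward the 14-40% typical of linear regulators.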
Procedia PDF Downloads 6788 Clinicomycological Pattern of Superficial Fungal Infections among Primary School Children in Communities in Enugu, Nigeria
Authors: Nkeiruka Elsie Ezomike, Chinwe L. Onyekonwu, Anthony N. Ikefuna, Bede C. Ibe
Abstract:
Superficial fungal infections (SFIs) are among the common cutaneous infections that affect children worldwide. They may lead to school absenteeism or school drop-out and hence a setback in the education of the child. Community-based studies in any locality are good reflections of the health conditions within that area. There is a dearth of information in the literature about SFIs among primary school children in Enugu. This study aimed to determine the clinicomycological pattern of SFIs among primary school children in rural and urban communities in Enugu. This was a comparative descriptive cross-sectional study among primary school children in Awgu (rural) and Enugu North (urban) Local Government Areas (LGAs). Subject selection was made over 6 months using a multi-stage sampling method. Information such as age, sex, parental education, and occupation was collected using questionnaires. The socioeconomic classes of the children were determined using the classification proposed by Oyedeji et al. Samples were collected from subjects with SFIs, and potassium hydroxide tests were done on the samples. The samples that tested positive were cultured by inoculation onto Sabouraud's dextrose chloramphenicol actidione agar. The characteristics of the isolates were identified according to their morphological features using Mycology Online, Atlas 2000, and Mycology Review 2003. Equal numbers of children were recruited from the two LGAs, and a total of 1662 pupils were studied. The mean ages of the study subjects were 9.03 ± 2.10 years in the rural and 10.46 ± 2.33 years in the urban communities. The male to female ratio was 1.6:1 in the rural and 1:1.1 in the urban communities. The personal hygiene of the children was significantly related to the presence of SFIs. The overall prevalence of SFIs among the study participants was 45%: 29.6% in the rural communities and 60.4% in the urban communities. 
The types of SFIs were tinea capitis (the commonest), tinea corporis, pityriasis versicolor, tinea unguium, and tinea manuum, with prevalence rates lower in the rural than in the urban communities. The clinical patterns were gray patch and black dot types of non-inflammatory tinea capitis, kerion, tinea corporis with trunk and limb distributions, and pityriasis versicolor with face, trunk and limb distributions. Gray patch was the most frequent pattern of SFI seen in both rural and urban communities; the black dot type was more frequent in rural than in urban communities. SFIs were frequent among children aged 5 to 8 years in the rural and 9 to 12 years in the urban communities. SFIs were commoner in males in the rural communities, whereas female dominance was observed in the urban communities. SFIs were more common in children from the low social class and in those with poor hygiene. Trichophyton tonsurans and Trichophyton soudanense were the common mycological isolates in the rural and urban communities, respectively. In conclusion, SFIs were less prevalent in rural than in urban communities. Trichophyton species were the most common fungal isolates in the communities. Health education of mothers and their children on SFIs and good personal hygiene will reduce the incidence of SFIs. Keywords: clinicomycological pattern, communities, primary school children, superficial fungal infections
Procedia PDF Downloads 12587 Integrating Data Mining within a Strategic Knowledge Management Framework: A Platform for Sustainable Competitive Advantage within the Australian Minerals and Metals Mining Sector
Authors: Sanaz Moayer, Fang Huang, Scott Gardner
Abstract:
In the highly leveraged business world of today, an organisation’s success depends on how it can manage and organize its traditional and intangible assets. In the knowledge-based economy, knowledge as a valuable asset gives enduring capability to firms competing in rapidly shifting global markets. It can be argued that ability to create unique knowledge assets by configuring ICT and human capabilities, will be a defining factor for international competitive advantage in the mid-21st century. The concept of KM is recognized in the strategy literature, and increasingly by senior decision-makers (particularly in large firms which can achieve scalable benefits), as an important vehicle for stimulating innovation and organisational performance in the knowledge economy. This thinking has been evident in professional services and other knowledge intensive industries for over a decade. It highlights the importance of social capital and the value of the intellectual capital embedded in social and professional networks, complementing the traditional focus on creation of intellectual property assets. Despite the growing interest in KM within professional services there has been limited discussion in relation to multinational resource based industries such as mining and petroleum where the focus has been principally on global portfolio optimization with economies of scale, process efficiencies and cost reduction. The Australian minerals and metals mining industry, although traditionally viewed as capital intensive, employs a significant number of knowledge workers notably- engineers, geologists, highly skilled technicians, legal, finance, accounting, ICT and contracts specialists working in projects or functions, representing potential knowledge silos within the organisation. This silo effect arguably inhibits knowledge sharing and retention by disaggregating corporate memory, with increased operational and project continuity risk. 
It may also limit the potential for process, product, and service innovation. In this paper, the strategic application of knowledge management, incorporating contemporary ICT platforms and data mining practices, is explored as an important enabler for knowledge discovery, reduction of risk, and retention of corporate knowledge in resource-based industries. With reference to the relevant strategy, management, and information systems literature, this paper highlights possible connections (currently undergoing empirical testing) between a Strategic Knowledge Management (SKM) framework incorporating supportive Data Mining (DM) practices and competitive advantage for multinational firms operating within the Australian resource sector. We also propose, based on a review of the relevant literature, that more effective management of soft and hard systems knowledge is crucial for major Australian firms in all sectors seeking to improve organisational performance through the human and technological capability captured in organisational networks.
Keywords: competitive advantage, data mining, mining organisation, strategic knowledge management
Procedia PDF Downloads 41586
Teleconnection between El Nino-Southern Oscillation and Seasonal Flow of the Surma River and Possibilities of Long Range Flood Forecasting
Authors: Monika Saha, A. T. M. Hasan Zobeyer, Nasreen Jahan
Abstract:
El Nino-Southern Oscillation (ENSO) is the interaction between the atmosphere and the ocean in the tropical Pacific, which causes inconsistent warm/cold weather in the tropical central and eastern Pacific Ocean. Due to the impact of climate change, ENSO events have become stronger in recent times, and it is therefore very important to study the influence of ENSO in climate studies. Bangladesh, lying in a low deltaic floodplain, experiences the worst consequences of flooding every year. To reduce the catastrophe of severe flooding events, non-structural measures such as flood forecasting can help in taking adequate precautions and steps. Forecasting seasonal floods with a lead time of several months is a key component of flood damage control and water management. The objective of this research is to identify the strength of the teleconnection between ENSO and the flow of the Surma River and to examine the potential for long-lead flood forecasting in the wet season. The Surma is one of the major rivers of Bangladesh and is part of the Surma-Meghna river system. In this research, sea surface temperature (SST) has been considered as the ENSO index, and the lead time is at least a few months, which is greater than the basin response time. The teleconnection has been assessed through correlation analysis between the July-August-September (JAS) flow of the Surma and the SST of the Nino 4 region for the corresponding months. The cumulative frequency distribution of the standardized JAS flow of the Surma has also been determined as part of assessing the possible teleconnection. Discharge data of the Surma River from 1975 to 2015 are used in this analysis, and a remarkable increase in the correlation coefficient between flow and ENSO has been observed from 1985 onwards. From the cumulative frequency distribution of the standardized JAS flow, it has been found that in any year the JAS flow has approximately a 50% probability of exceeding the long-term average JAS flow.
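The correlation and exceedance analysis described in the abstract can be sketched as follows. This is a minimal illustration only: the actual Surma discharge and Nino 4 SST records are not reproduced here, so synthetic stand-in data and illustrative variable names are assumed throughout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 1975-2015 records (41 years): mean JAS
# discharge of the Surma and the corresponding Nino 4 SST anomaly,
# with an assumed inverse relationship built in for illustration.
nino4_sst = rng.normal(0.0, 0.8, 41)
jas_flow = -0.5 * nino4_sst + rng.normal(0.0, 1.0, 41)

# Teleconnection strength: Pearson correlation between JAS flow and SST.
r = np.corrcoef(jas_flow, nino4_sst)[0, 1]

# Standardize the JAS flow and estimate the empirical probability of
# exceeding the long-term average (i.e. standardized flow > 0).
z = (jas_flow - jas_flow.mean()) / jas_flow.std()
p_exceed = (z > 0).mean()

print(f"correlation r = {r:.2f}, P(flow > mean) = {p_exceed:.2f}")
```

Conditioning the same exceedance estimate on warm-episode and cold-episode years (rather than pooling all years, as above) would yield the El Nino and La Nina probabilities quoted in the abstract.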
During El Nino years (the warm episode of ENSO), this probability of exceedance drops to 23%, while in La Nina years (the cold episode of ENSO) it increases to 78%. Discriminant analysis, also known as 'categoric prediction', has been performed to assess the possibility of long-lead flood forecasting. It has helped to categorize the flow data (high, average and low) based on the classification of the predicted SST (warm, normal and cold). From the discriminant analysis, it has been found that for the Surma River, the probability of a high flood in the cold period is 75% and the probability of a low flood in the warm period is 33%. A synoptic parameter, the forecasting index (FI), has also been calculated to judge the forecast skill and to compare different forecasts. This study will help the concerned authorities and stakeholders to take long-term water resources decisions and formulate policies on river basin management, which will reduce possible damage to life, agriculture, and property.
Keywords: El Nino-Southern Oscillation, sea surface temperature, Surma River, teleconnection, cumulative frequency distribution, discriminant analysis, forecasting index
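The 'categoric prediction' step, classifying flow as high, average or low from the predicted SST category, can be sketched with a nearest-class-mean rule, which coincides with linear discriminant analysis under a shared spherical covariance (an assumption for this sketch). The data, thresholds and class labels below are synthetic and illustrative, not the study's actual records.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic SST anomalies (predictor) and flow classes 0=low, 1=average,
# 2=high, with the inverse SST-flow relationship assumed above built in.
sst = rng.normal(0.0, 1.0, 300)
flow_class = np.digitize(-sst + rng.normal(0.0, 0.5, 300), [-0.5, 0.5])

# Nearest-class-mean discriminant rule: assign a new SST value to the
# flow class whose mean SST it is closest to.
means = np.array([sst[flow_class == k].mean() for k in range(3)])
predict = lambda x: int(np.argmin((x - means) ** 2))

# Classify a cold (La Nina-like) and a warm (El Nino-like) season.
cold_pred = predict(-1.5)   # expect the high-flow class
warm_pred = predict(+1.5)   # expect the low-flow class
print(cold_pred, warm_pred)
```

Tabulating such predictions against observed flow classes over the record is one way to obtain the conditional probabilities (e.g. high flood given a cold period) reported in the abstract.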
Procedia PDF Downloads 154