Search results for: fractional edge domination number
117 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses. This is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable, representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning. We use Spike-Time-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same pattern through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other output neuron is a LIF neuron with dendritic relationships. Then, the five input neurons are allowed to fire in a particular order. The membrane potentials are reset and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, the membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences.
Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
Keywords: dendritic computation, spiking neural networks, point neuron model
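A minimal NumPy sketch of the mechanism described above (not the authors' Bindsnet code): a LIF membrane whose effective synaptic weights are modulated by a pairwise relation matrix, so the peak response depends on the input order. All parameter values are illustrative assumptions.

```python
# Sketch: LIF neuron with pairwise "synaptic relation" variables R[i, j]
# that scale how a spike at synapse j potentiates/depresses synapse i.
import numpy as np

def run_lif(order, R, w=1.0, tau_m=20.0, tau_r=5.0, dt=1.0, steps=60):
    """Drive a LIF membrane with 5 inputs firing 5 ms apart in `order`;
    return the peak membrane potential reached."""
    n = len(order)
    v = 0.0                      # membrane potential (resting = 0)
    trace = np.zeros(n)          # recent pre-synaptic activity per synapse
    v_peak = 0.0
    spike_times = {syn: 5 * k for k, syn in enumerate(order)}
    for t in range(steps):
        i_syn = 0.0
        for syn, ts in spike_times.items():
            if t == ts:
                # Effective weight depends on neighbouring synapses' traces.
                w_eff = w * (1.0 + R[syn] @ trace)
                i_syn += w_eff
                trace[syn] += 1.0
        v += dt * (-v / tau_m) + i_syn
        trace -= dt * trace / tau_r
        v_peak = max(v_peak, v)
    return v_peak

rng = np.random.default_rng(0)
R = rng.uniform(-0.5, 0.5, size=(5, 5))   # random dendritic relations
np.fill_diagonal(R, 0.0)

fwd = [0, 1, 2, 3, 4]
zero = np.zeros((5, 5))
print("plain LIF :", run_lif(fwd, zero), run_lif(fwd[::-1], zero))
print("dendritic :", run_lif(fwd, R), run_lif(fwd[::-1], R))
```

With R set to zero the forward and reversed sequences yield identical peaks; with non-zero relations they differ, mirroring the order-discrimination result described above.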
Procedia PDF Downloads 134
116 Quality in Healthcare: An Autism-Friendly Hospital Emergency Waiting Room
Authors: Elena Bellini, Daniele Mugnaini, Michele Boschetto
Abstract:
People with an Autistic Spectrum Disorder and an Intellectual Disability who need to attend a Hospital Emergency Waiting Room frequently present high levels of discomfort and challenging behaviors due to stress-related hyperarousal, sensory sensitivity, novelty-anxiety, and communication and self-regulation difficulties. Increased agitation and acting out also disturb the diagnostic and therapeutic processes and the emergency room climate. Architectural design disciplines aimed at reducing distress in hospitals or creating autism-friendly environments are called for to find effective answers to this particular need. A growing number of researchers are considering the physical environment as an important point of intervention for people with autism. It has been shown that providing the right setting can help enhance confidence and self-esteem and can have a profound impact on their health and wellbeing. Environmental psychology has evaluated the perceived quality of care, looking at the design of hospital rooms, paths and circulation, waiting rooms, services, and devices. Furthermore, many studies have investigated the influence of the hospital environment on patients, in terms of stress reduction and speed of therapeutic intervention, but also on health professionals and their work. Several services around the world are organizing autism-friendly hospital environments, which involve the architecture and specific staff training. In Italy, the association Spes contra spem promoted and published, in 2013, the 'Chart of disabled people in the hospital'. It stipulates that disabled people should have equal rights to accessible and high-quality care. There are a few Italian examples of therapeutic programmes for autistic people, such as the Dama project in Milan and the recent experience of the Children and Autism Foundation in Pordenone. Careggi's Emergency Waiting Room in Florence has been built to meet this challenge. This research project comes from a collaboration between the technical staff of Careggi Hospital, the Center for autism PAMAPI, and architects with expertise in sensory environments. The focus-group methodology involved architects, psychologists, and professionals in transdisciplinary research centered on the links between the spatial characteristics and the clinical state of people with ASD. The relationship between architectural space and quality of life is studied to pay maximum attention to users' needs and to support the medical staff in their work through a specific training program. The result of this research is a set of criteria used to design the emergency waiting room, which will be illustrated. A protected room, with a clear space design, maximizes comprehension and predictability. The multisensory environment is intended to help sensory integration and relaxation. Visual communication through an iPad allows an anticipated understanding of medical procedures, and a specific technological system supports requests, choices, and self-determination in order to fit sensory stimulation to personal preferences, especially for hypo- and hypersensitive people. All these characteristics should ensure better regulation of arousal and fewer behavior problems, improving treatment accessibility, safety, and effectiveness. First results about patient-satisfaction levels will be presented.
Keywords: accessibility of care, autism-friendly architecture, personalized therapeutic process, sensory environment
Procedia PDF Downloads 268
115 Distributed Listening in Intensive Care: Nurses’ Collective Alarm Responses Unravelled through Auditory Spatiotemporal Trajectories
Authors: Michael Sonne Kristensen, Frank Loesche, James Foster, Elif Ozcan, Judy Edworthy
Abstract:
Auditory alarms play an integral role in intensive care nurses’ daily work. Most medical devices in the intensive care unit (ICU) are designed to produce alarm sounds in order to make nurses aware of immediate or prospective safety risks. The utilisation of sound as a carrier of crucial patient information is highly dependent on nurses’ presence - both physically and mentally. For ICU nurses, especially those who work with stationary alarm devices at the patient bed space, it is a challenge to display ‘appropriate’ alarm responses at all times, as they have to navigate with great flexibility in a complex work environment. While being primarily responsible for a small number of allocated patients, they are often required to engage with other nurses’ patients, relatives, and colleagues at different locations inside and outside the unit. This work explores the social strategies used by a team of nurses to comprehend and react to the information conveyed by the alarms in the ICU. Two main research questions guide the study: To what extent do alarms from a patient bed space reach the relevant responsible nurse by direct auditory exposure? By which means do responsible nurses get informed about their patients’ alarms when not directly exposed to the alarms? A comprehensive video-ethnographic field study was carried out to capture and evaluate alarm-related events in an ICU. The study involved close collaboration with four nurses who wore eye-level cameras and ear-level binaural audio recorders during several work shifts. At all times the entire unit was monitored by multiple video and audio recorders. From a data set of hundreds of hours of recorded material, information about the nurses’ location, social interaction, and alarm exposure at any point in time was coded in a multi-channel replay interface. The data show that responsible nurses’ direct exposure to and awareness of the alarms of their allocated patients vary significantly depending on workload, social relationships, and the location of the patient’s bed space. Distributed listening is deliberately employed by the nursing team as a social strategy to respond adequately to alarms, but the patterns of information flow prompted by alarm-related events are not uniform. Auditory Spatiotemporal Trajectory (AST) is proposed as a methodological label to designate the integration of temporal, spatial and auditory load information. As a mixed-method metric, it provides tangible evidence of how nurses’ individual alarm-related experiences differ from one another and from stationary points in the ICU. Furthermore, it is used to demonstrate how alarm-related information reaches the individual nurse through principles of social and distributed cognition, and how that information relates to the actual alarm event. Thereby it bridges a long-standing gap in the literature on medical alarm utilisation between, on the one hand, initiatives to measure objective data of the medical sound environment without consideration for any human experience, and, on the other hand, initiatives to study subjective experiences of the medical sound environment without detailed evidence of the objective characteristics of the environment.
Keywords: auditory spatiotemporal trajectory, medical alarms, social cognition, video-ethnography
Procedia PDF Downloads 191
114 The Use of Artificial Intelligence in the Context of a Space Traffic Management System: Legal Aspects
Authors: George Kyriakopoulos, Photini Pazartzis, Anthi Koskina, Crystalie Bourcha
Abstract:
The need to secure safe access to and return from outer space, as well as to ensure the viability of outer space operations, keeps alive the debate over organizing space traffic through a Space Traffic Management System (STM). The proliferation of outer space activities in recent years, as well as the dynamic emergence of the private sector, has gradually resulted in a diverse universe of actors operating in outer space. These developments have had an increasingly adverse impact on outer space sustainability, as the growing amount of space debris clearly demonstrates. This landscape poses considerable threats to the outer space environment and its operators that need to be addressed by a combination of scientific-technological measures and regulatory interventions. In this context, recourse to recent technological advancements and, in particular, to Artificial Intelligence (AI) and machine learning systems could achieve exponential results in promoting space traffic management with respect to collision avoidance as well as launch and re-entry procedures/phases. New technologies can support the prospects of a successful space traffic management system at an international scale by enabling, inter alia, timely, accurate and analytical processing of large data sets and rapid decision-making, more precise space debris identification and tracking, and overall minimization of collision risks and reduction of operational costs. What is more, a significant part of space activities (i.e. the launch and/or re-entry phase) takes place in airspace rather than in outer space, hence the overall discussion also involves the technically and legally highly developed international (and national) Air Traffic Management (ATM) system. Nonetheless, from a regulatory perspective, the use of AI for the purposes of space traffic management puts forward implications that merit particular attention. Key issues in this regard include the delimitation of AI-based activities as space activities, the designation of the applicable legal regime (international space or air law, national law), the assessment of the nature and extent of international legal obligations regarding space traffic coordination, as well as the appropriate liability regime applicable to AI-based technologies when operating for space traffic coordination, taking into particular consideration the dense regulatory developments at EU level. In addition, the prospects of institutionalizing international cooperation and promoting an international governance system, together with the challenges of establishing a comprehensive international STM regime, are revisited in the light of the intervention of AI technologies. This paper aims at examining the regulatory implications advanced by the use of AI technology in the context of space traffic management operations and its key correlating concepts (SSA, space debris mitigation), drawing in particular on international and regional considerations in the field of STM (e.g. UNCOPUOS, the International Academy of Astronautics, the European Space Agency, among other actors), the promising advancements of the EU approach to AI regulation and, last but not least, national approaches regarding the use of AI in the context of space traffic management, in toto.
Acknowledgment: The present work was co-funded by the European Union and Greek national funds through the Operational Program "Human Resources Development, Education and Lifelong Learning" (NSRF 2014-2020), under the call "Supporting Researchers with an Emphasis on Young Researchers – Cycle B" (MIS: 5048145).
Keywords: artificial intelligence, space traffic management, space situational awareness, space debris
Procedia PDF Downloads 261
113 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects
Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm
Abstract:
Soft robots have drawn great interest recently due to the rich range of possible shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to enable the integration of a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot based on Euler-Bernoulli beam theory. Nominally (unpowered), it lies flat on the ground, and when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing its analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to exactly determine which parts of the soft robot are lifted off the ground, and the exact shape of these sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to our model and analyzing shear forces, the model can then determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot. Through this, the spatial distribution of friction forces between ground and robot can be determined. (iii) By combining this spatial friction distribution with the shape of the soft robot, as a function of time as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm thick steel foil. Piezoelectric properties of commercially available thin-film piezoelectric actuators were assumed. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system using PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each of which consisted of two rigid arms with appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown both for the full deformation shape and for the motion of the robot.
Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology
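The shape computation lends itself to a simple illustration. The sketch below (an illustrative simplification, not the authors' self-consistent solver) integrates a curvature proportional to each actuator's voltage along the sheet and crudely clamps points that would penetrate the ground, standing in for the gravity/contact treatment described above; all constants are assumed.

```python
# Simplified Euler-Bernoulli-style shape integration for a 2D actuated sheet.
import numpy as np

def robot_shape(voltages, seg_len=0.01, kappa_per_volt=2.0, n_sub=20):
    """Return (x, z) coordinates of the sheet; one curvature per actuator,
    curvature = kappa_per_volt * V. Ground contact: points cannot go below z=0."""
    xs, zs = [0.0], [0.0]
    theta = 0.0                        # local tangent angle
    ds = seg_len / n_sub
    for V in voltages:
        kappa = kappa_per_volt * V
        for _ in range(n_sub):
            theta += kappa * ds
            x = xs[-1] + np.cos(theta) * ds
            z = zs[-1] + np.sin(theta) * ds
            if z < 0.0:                # crude contact model: flattened by gravity
                z, theta = 0.0, 0.0
            xs.append(x); zs.append(z)
    return np.array(xs), np.array(zs)

x, z = robot_shape([0.0, 1.0, -1.0, 1.0, 0.0])   # 5 actuators, arbitrary volts
print("max lift-off height:", z.max())
```

The paper's model replaces the hard clamp with an exact, self-consistent determination of the lifted sections and uses torque balance to recover the ground-force and friction distributions.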
Procedia PDF Downloads 181
112 Membrane Permeability of Middle Molecules: A Computational Chemistry Approach
Authors: Sundaram Arulmozhiraja, Kanade Shimizu, Yuta Yamamoto, Satoshi Ichikawa, Maenaka Katsumi, Hiroaki Tokiwa
Abstract:
Drug discovery is shifting from small-molecule-based drugs targeting a local active site to middle molecules (MMs) targeting large, flat, and groove-shaped binding sites, for example, protein-protein interfaces, because at least half of all targets assumed to be involved in human disease have been classified as “difficult to drug” with traditional small molecules. Hence, MMs such as peptides, natural products, glycans, and nucleic acids with various highly potent bioactivities have become important targets for drug discovery programs in recent years, as they could be used for “undruggable” intracellular targets. Cell membrane permeability is one of the key properties of pharmacodynamically active MM drug compounds, and so evaluating this property for potential MMs is crucial. Computational prediction of the cell membrane permeability of molecules is very challenging; however, recent advancements in molecular dynamics simulations help to solve this issue partially. It is expected that MMs with high membrane permeability will enable drug discovery research to expand its borders towards intracellular targets. Further, to understand the chemistry behind the permeability of MMs, it is necessary to investigate their conformational changes during permeation through the membrane, and for that their interactions with the membrane field should be studied reliably, because these interactions involve various non-bonding interactions such as hydrogen bonding, π-stacking, charge transfer, polarization, dispersion, and non-classical weak hydrogen bonding. Therefore, parameter-based classical mechanics calculations are hardly sufficient to investigate these interactions; rather, quantum mechanical (QM) calculations are essential. The fragment molecular orbital (FMO) method can be used for this purpose, as it performs ab initio QM calculations by dividing the system into fragments. The present work is aimed at studying the cell permeability of middle molecules using molecular dynamics simulations and FMO-QM calculations. For this purpose, the natural compound syringolin and its analogues were considered in this study. Molecular simulations were performed using the NAMD and Gromacs programs with the CHARMM force field. FMO calculations were performed using the PAICS program at the correlated Resolution-of-Identity second-order Møller-Plesset (RI-MP2) level with the cc-pVDZ basis set. The simulations clearly show that while syringolin could not permeate the membrane, its selected analogues go through the medium on a nanosecond scale. This correlates well with the existing experimental evidence that these syringolin analogues are membrane-permeable compounds. Further analyses indicate that intramolecular π-stacking interactions in the syringolin analogues influenced their permeability positively. These intramolecular interactions reduce the polarity of these analogues so that they can permeate the lipophilic cell membrane. Conclusively, the cell membrane permeability of various middle molecules with potent bioactivities is efficiently studied using molecular dynamics simulations. Insight into this behavior is thoroughly investigated using FMO-QM calculations. Results obtained in the present study indicate that non-bonding intramolecular interactions such as hydrogen bonding and π-stacking, along with the conformational flexibility of MMs, are essential for favorable membrane permeation.
These results are interesting and are a nice example of how this theoretical calculation approach could be used to study the permeability of other middle molecules. This work was supported by the Japan Agency for Medical Research and Development (AMED) under Grant Number 18ae0101047.
Keywords: fragment molecular orbital theory, membrane permeability, middle molecules, molecular dynamics simulation
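The permeability estimate that MD free-energy profiles typically feed into can be sketched with the standard inhomogeneous solubility-diffusion integral, P = [∫ exp(ΔG(z)/kT)/D(z) dz]⁻¹; the Gaussian barrier and constant diffusivity below are placeholder assumptions, not the syringolin data from the study.

```python
# Sketch: inhomogeneous solubility-diffusion permeability from a free-energy
# profile along the membrane normal z. All profile values are illustrative.
import numpy as np

kT = 0.593                                 # kcal/mol at ~298 K
z = np.linspace(-30.0, 30.0, 601)          # Angstrom, membrane normal
dG = 8.0 * np.exp(-(z / 12.0) ** 2)        # free-energy profile, kcal/mol (assumed)
D = np.full_like(z, 1e-5)                  # local diffusivity, cm^2/s (assumed)

# Resistance R = integral of exp(dG/kT) / D(z) dz ; permeability P = 1/R.
dz_cm = (z[1] - z[0]) * 1e-8               # Angstrom -> cm
R = np.sum(np.exp(dG / kT) / D) * dz_cm
print("P =", 1.0 / R, "cm/s")
```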
Procedia PDF Downloads 189
111 The Use of Non-Parametric Bootstrap in Computing of Microbial Risk Assessment from Lettuce Consumption Irrigated with Contaminated Water by Sanitary Sewage in Infulene Valley
Authors: Mario Tauzene Afonso Matangue, Ivan Andres Sanchez Ortiz
Abstract:
The Metropolitan area of Maputo (Mozambique's capital city) is located in a semi-arid zone (800 mm annual rainfall) with 1,101,170 inhabitants. On the west side are the flatlands of Infulene, where the Mulauze River flows towards the Indian Ocean, receiving at this site the storm water contaminated with sanitary sewage from Maputo, transported through a concrete open channel. In Infulene, local communities grow salad crops such as tomato, onion, garlic, lettuce, and cabbage, which are then commercialized and consumed in several markets in Maputo City. Lettuce is the salad crop most consumed daily in different meals, generally in fast foods, breakfasts, lunches, and dinners. However, the risk of infection by several pathogens due to the consumption of lettuce, using Quantitative Microbial Risk Assessment (QMRA) tools, is still unknown, since there are few studies or publications concerning this matter in Mozambique. This work is aimed at determining the annual risk arising from the consumption of lettuce grown in the Infulene valley, in Maputo, using QMRA tools. The exposure model was constructed upon the volume of contaminated water remaining on the lettuce leaves, the empirical relations between the number of pathogens and the indicator microorganism (E. coli), the consumption of lettuce (g), and the reduction of pathogens (days). The reference pathogens were Vibrio cholerae, Cryptosporidium, norovirus, and Ascaris. The water quality samples (E. coli) were collected in the storm water channel from January 2016 to December 2018, comprising 65 samples, and the urban lettuce consumption data were collected through inquiry in the Maputo metropolis, covering 350 persons. A non-parametric bootstrap was performed involving 10,000 iterations over the collected dataset, namely, water quality (E. coli) and lettuce consumption. The dose-response models were: exponential for Cryptosporidium, the Kummer confluent hypergeometric function (1F1) for Vibrio and Ascaris, and the Gaussian hypergeometric function (2F1(a,b;c;z)) for norovirus. The annual infection risk estimates were performed using R 3.6.0 (R Core Team) software by Monte Carlo (Latin hypercube) sampling involving 10,000 iterations. The annual infection risk values, expressed by the median and the 95th percentile, per person per year (pppy), arising from the consumption of lettuce are as follows: Vibrio cholerae (1.00, 1.00), Cryptosporidium (3.91x10⁻³, 9.72x10⁻³), norovirus (5.22x10⁻¹, 9.99x10⁻¹) and Ascaris (2.59x10⁻¹, 9.65x10⁻¹). Thus, the consumption of the lettuce would result in greater risks than the tolerable levels (< 10⁻³ pppy or 10⁻⁶ DALY) for all pathogens, and Vibrio cholerae is the most virulent pathogen according to the single-hit models, followed by Ascaris lumbricoides and norovirus. The sensitivity analysis carried out in this work pointed out that in the whole QMRA, the most important input variable was the reduction of pathogens between harvest and consumption (Spearman rank value was 0.69), followed by water quality (Spearman rank value was 0.69). The decision-makers (Mozambique Government) must strengthen the prevention measures related to pathogen reduction in lettuce (i.e., washing) and engage in wastewater treatment engineering.
Keywords: annual infections risk, lettuce, non-parametric bootstrapping, quantitative microbial risk assessment tools
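The bootstrap-plus-Monte-Carlo chain can be sketched as follows. The study used R; this is an equivalent Python sketch, and the E. coli counts, intake data, pathogen ratio, decay factor, and dose-response parameter are all placeholder assumptions, not the Infulene data.

```python
# Sketch: non-parametric bootstrap of observed inputs pushed through an
# exponential dose-response model to get an annual infection risk.
import numpy as np

rng = np.random.default_rng(42)
ecoli = np.array([1e4, 5e4, 2e5, 8e4, 3e5, 1e5])   # CFU/100 mL (placeholder samples)
intake_g = np.array([10, 25, 15, 30, 20])          # g lettuce/day (placeholder survey)

n = 10_000
ec = rng.choice(ecoli, n)                  # bootstrap resamples of water quality
g = rng.choice(intake_g, n)                # bootstrap resamples of consumption
ratio = 1e-5                               # pathogen : E. coli ratio (assumed)
water_ml_per_g = 0.108                     # mL water retained per g lettuce (assumed)
decay = 10 ** -2                           # pathogen reduction, harvest -> consumption

dose = ec / 100.0 * water_ml_per_g * g * ratio * decay
r = 4.2e-3                                 # exponential dose-response parameter (assumed)
p_daily = 1.0 - np.exp(-r * dose)
p_annual = 1.0 - (1.0 - p_daily) ** 365    # risk per person per year

print("median:", np.median(p_annual), "95th pct:", np.percentile(p_annual, 95))
```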
Procedia PDF Downloads 122
110 Planning Railway Assets Renewal with a Multiobjective Approach
Authors: João Coutinho-Rodrigues, Nuno Sousa, Luís Alçada-Almeida
Abstract:
Transportation infrastructure systems are fundamental in modern society and economy. However, they need modernizing, maintaining, and reinforcing interventions which require large investments. In many countries, accumulated intervention delays arise from aging and intense use, magnified by the financial constraints of the past. The decision problem of managing the renewal of large backlogs is common to several types of important transportation infrastructures (e.g., railways, roads). This problem requires considering financial aspects as well as operational constraints under a multidimensional framework. The present research introduces a linear programming multiobjective model for managing railway infrastructure asset renewal. The model aims at minimizing three objectives: (i) yearly investment peak, by evenly spreading investment throughout multiple years; (ii) total cost, which includes extra maintenance costs incurred from renewal backlogs; (iii) priority delays related to work start postponements on the higher-priority railway sections. Operational constraints ensure that passenger and freight services are not excessively delayed by having railway line sections under intervention. Achieving a balanced annual investment plan, without compromising the total financial effort or excessively postponing the execution of the priority works, was the motivation for pursuing the research now presented. The methodology, inspired by a real case study and tested with real data, reflects aspects of the practice of an infrastructure management company and is generalizable to different types of infrastructure (e.g., railways, highways). It was conceived for treating renewal interventions in infrastructure assets, which in a railway network may be rails, ballast, sleepers, etc.; while a section is under intervention, trains must run at reduced speed, causing delays in services. The model cannot, therefore, allow for an accumulation of works on the same line, which may cause excessively large delays. Similarly, the lines do not all have the same socio-economic importance or service intensity, making it necessary to prioritize the sections to be renewed. The model takes these issues into account, and its output is an optimized works schedule for the renewal project, translatable into Gantt charts. The infrastructure management company provided all the data for the first test case study and validated the parameterization. This case consists of several sections to be renewed, over 5 years and belonging to 17 lines. A large instance was also generated, reflecting a problem of a size similar to the USA railway network (considered the largest in the world), so it is not expected that considerably larger problems appear in real life; an average backlog of 25 years and a project horizon of ten years were considered. Despite the very large increase in the number of decision variables (200 times as large), the computational time cost did not increase very significantly. It is thus expectable that just about any real-life problem can be treated on a modern computer, regardless of size. The trade-off analysis shows that if the decision maker allows some increase in the maximum yearly investment (i.e., degradation of objective (i)), solutions improve considerably in the remaining two objectives.
Keywords: transport infrastructure, asset renewal, railway maintenance, multiobjective modeling
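A toy version of the optimization can be written with PuLP. This sketch is not the authors' formulation: it collapses the objectives into a weighted sum of the investment peak and the priority delay (the backlog-cost term is omitted for brevity), and the sections, costs, service constraint, and weights are invented.

```python
# Sketch: schedule renewal of sections across years, minimizing a weighted
# sum of the yearly investment peak and priority-weighted start delays.
import pulp

sections = {"A": 10, "B": 6, "C": 8}        # renewal cost per section (assumed units)
priority = {"A": 3, "B": 1, "C": 2}         # higher = more urgent (assumed)
years = range(3)

prob = pulp.LpProblem("renewal", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (sections, years), cat="Binary")  # x[s][y] = renew s in y
peak = pulp.LpVariable("peak", lowBound=0)

for s in sections:                          # every section renewed exactly once
    prob += pulp.lpSum(x[s][y] for y in years) == 1
for y in years:                             # peak bounds each year's investment
    prob += pulp.lpSum(sections[s] * x[s][y] for s in sections) <= peak
for y in years:                             # stand-in for the service constraints:
    prob += pulp.lpSum(x[s][y] for s in sections) <= 1  # one intervention per year

delay = pulp.lpSum(priority[s] * y * x[s][y] for s in sections for y in years)
prob += 1.0 * peak + 0.1 * delay            # weighted objectives (weights assumed)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in sections:
    print(s, [int(x[s][y].value()) for y in years])
```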
Procedia PDF Downloads 146
109 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators
Authors: K. O'Malley
Abstract:
Background: The emergence and rapid evolution of large language models (LLMs) (i.e., models of generative artificial intelligence, or AI) have been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast swathes of internet text and data and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A considered search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years. The search was limited to publications in the English language only. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent. The mean score across all studies was 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibits a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.
Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university
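The summary statistics quoted above are straightforward to reproduce from per-study scores. The per-study values below are placeholders (only the reported range of 67.1-88.6 percent, mean 82.49, and SD 5.95 come from the synthesis), so this sketch only illustrates the computation, not the underlying data.

```python
# Sketch: summary statistics over per-study ChatGPT 4.0 exam scores.
import statistics

scores = [67.1, 79.0, 81.5, 83.0, 84.2, 85.0,
          85.5, 86.0, 87.1, 88.0, 88.6]          # hypothetical 11 study scores
print("mean :", round(statistics.mean(scores), 2))
print("sd   :", round(statistics.stdev(scores), 2))
print("range:", min(scores), "-", max(scores))
print("all pass:", all(s >= 60.0 for s in scores))  # assumed 60% pass threshold
```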
Procedia PDF Downloads 34
108 Organization Structure of Towns and Villages System in County Area Based on Fractal Theory and Gravity Model: A Case Study of Suning, Hebei Province, China
Authors: Liuhui Zhu, Peng Zeng
Abstract:
With the rapid development of China, urbanization has entered the transformation and promotion stage, and its direction of development has shifted to overall regional synergy. China has a large number of towns and villages, of comparatively small scale and scattered distribution, which have long supported and provided resources to cities, leading to urban-rural opposition; it is therefore difficult to achieve common development in a single town or village. In this context, regional development should focus more on towns and villages so that they form a synergetic system, joining the regional association with cities. Thus, the paper raises the question of how to effectively organize the towns and villages system to regulate resource allocation and improve the comprehensive value of the regional area. To answer the question, it is necessary to find a suitable research unit and analyze the present situation of its towns and villages system for optimal development. Combing through relevant research and theoretical models, the county is the most basic administrative unit in China that can directly guide and regulate the development of towns and villages, so the paper takes the county as the research unit. Following the theoretical concept of ‘three structures and one network’, the paper constructs a research framework to analyse the present situation of the towns and villages system, covering scale structure, functional structure, spatial structure, and organization network. The analytical methods draw on fractal theory and the gravity model, using statistical and spatial data. The scale structure analysis examines rank-size dimensions and uses the principal component method to calculate the comprehensive scale of towns and villages. The functional structure analysis examines the functional types and industrial development of towns and villages. The spatial structure analysis examines the aggregation dimension, network dimension, and correlation dimension of spatial elements to represent the overall spatial relationships. In terms of the organization network, from the perspective of physical and non-physical links, the paper analyzes the transportation network and the gravitational network. Based on the analysis of the present situation, optimization strategies are proposed in order to achieve a synergetic relationship between towns and villages in the county area. The paper uses Suning county in the Beijing-Tianjin-Hebei region as a case study to apply the research framework and methods and then proposes the optimization orientations. The analysis results indicate that: (1) Suning county lacks medium-scale towns to transfer effects from towns to villages. (2) The distribution of gravitational centers is uneven, and the effect of gravity is limited to nearby towns and villages only. The gravitational network is incomplete, leaving economic activities scattered and isolated. (3) The overall development of the towns and villages system is immature, staying at the ‘single heart and multi-core’ stage, and some specific optimization strategies are proposed. This study provides a regional view of the development of towns and villages and establishes a research framework and methods for the towns and villages system for forming an effective synergetic relationship between them, contributing to organizing resources, stimulating endogenous motivation, and forming counter-magnets to support urban-rural integration.
Keywords: towns and villages system, organization structure, county area, fractal theory, gravity model
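Both quantitative tools named in the framework reduce to short computations. The sketch below fits a rank-size (Zipf) exponent by log-log regression and builds a gravity matrix G_ij = P_i P_j / d_ij²; the populations and distances are invented examples, not the Suning data.

```python
# Sketch: rank-size exponent and gravity-model attraction matrix.
import numpy as np

pop = np.array([95_000, 40_000, 22_000, 15_000, 9_000, 6_500])  # town sizes, sorted
rank = np.arange(1, len(pop) + 1)
q, logK = np.polyfit(np.log(rank), np.log(pop), 1)
print("rank-size exponent:", round(-q, 2))   # ~1 suggests a balanced hierarchy

d = np.array([[0, 12, 25, 30, 41, 55],       # pairwise distances, km (symmetric)
              [12, 0, 18, 22, 35, 48],
              [25, 18, 0, 14, 20, 33],
              [30, 22, 14, 0, 16, 28],
              [41, 35, 20, 16, 0, 15],
              [55, 48, 33, 28, 15, 0]], dtype=float)
with np.errstate(divide="ignore"):
    G = np.outer(pop, pop) / d ** 2          # gravitational attraction matrix
np.fill_diagonal(G, 0.0)
print("strongest link (i, j):", np.unravel_index(G.argmax(), G.shape))
```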
Procedia PDF Downloads 138
107 First Attempts Using High-Throughput Sequencing in Senecio from the Andes
Authors: L. Salomon, P. Sklenar
Abstract:
The Andes hold the highest plant species diversity in the world. How this occurred is one of the most intriguing questions in studies addressing the origin and patterning of plant diversity worldwide. Recently, the explosive adaptive radiations found in high-Andean groups have been pointed to as triggers of this spectacular diversity. The Andes are the species-richest area for the biggest genus of the Asteraceae family: Senecio. There, the genus presents an incredible diversity of species, striking growth-form variation, and a large niche span. Even when some studies tried to disentangle the evolutionary history of some Andean species of Senecio, they obtained partially resolved and poorly supported phylogenies, as expected for recently radiated groups. High-throughput sequencing (HTS) approaches have proved to be a powerful tool for answering phylogenetic questions in groups whose evolutionary histories are recent and for which traditional techniques like Sanger sequencing are not informative enough. Although these tools have been used to understand the evolution of an increasing number of Andean groups, their scope has so far not been applied to Senecio. This project aims to contribute to a better knowledge of the mechanisms shaping the hyperdiversity of Senecio in the Andean region, using HTS focused on Senecio ser. Culcitium (Asteraceae), recently recircumscribed. It will firstly reconstruct a highly resolved and supported phylogeny, and then assess the role of allopatric differentiation, hybridization, and genome duplication in the diversification of the group. Using the Hyb-Seq approach, combining target enrichment with Asteraceae COS loci baits and genome skimming, more than 100 new accessions were generated. The HybPhyloMaker and HybPiper pipelines were used for the phylogenetic analyses, and another pipeline in development (Paralogue Wizard) was used to deal with paralogues. RAxML was used to generate gene trees and Astral for species-tree reconstruction. Phyparts was used as a first step to explore gene-tree discordance along the clades. Fully resolved trees with moderate support were obtained, showing Senecio ser. Culcitium as monophyletic. Within the group, some species formed well-supported clades with morphologically related species, while some species would not have exclusive ancestry, in concordance with previous studies using amplified fragment length polymorphism (AFLP) showing geographical differentiation. Discordance between gene trees was detected. Paralogues were detected for many loci, indicating possible genome duplications; ploidy-level estimation using flow cytometry will be carried out during the next months in order to identify the role of this process in the diversification of the group. Likewise, the TreeSetViz package for Mesquite, the hierarchical likelihood ratio congruence test using Concaterpillar, and the Procrustean Approach to Cophylogeny (PACo) will be used to evaluate the congruence among different inheritance patterns. In order to evaluate the influence of hybridization and incomplete lineage sorting (ILS) in each clade resulting from the phylogeny, the method of Joly et al. (2009) in a coalescent scenario and Patterson's D-statistic will be performed. Even though the main sources of discordance between gene trees have not yet been explored in detail, the data show that, at least to some degree, processes such as genome duplication, hybridization, and/or ILS could be involved in the evolution of the group.
Keywords: adaptive radiations, Andes, genome duplication, hybridization, Senecio
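The D-statistic mentioned above reduces to counting ABBA and BABA site patterns across a four-taxon alignment; a toy sketch (the sites are invented, not the Senecio data):

```python
# Sketch: Patterson's D (ABBA-BABA test) over biallelic sites of four taxa
# ordered (P1, P2, P3, outgroup). D ~ 0 under pure incomplete lineage
# sorting; |D| substantially > 0 suggests gene flow involving P3.
def patterson_d(alignment):
    abba = baba = 0
    for p1, p2, p3, o in alignment:
        if len({p1, p2, p3, o}) != 2:      # keep biallelic sites only
            continue
        if p1 == o and p2 == p3 and p1 != p2:
            abba += 1                      # pattern ABBA
        elif p2 == o and p1 == p3 and p1 != p2:
            baba += 1                      # pattern BABA
    return (abba - baba) / (abba + baba) if abba + baba else 0.0

sites = [("A", "G", "G", "A"), ("G", "A", "G", "A"),
         ("A", "G", "G", "A"), ("C", "C", "T", "C")]
print("D =", patterson_d(sites))
```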
Procedia PDF Downloads 140
106 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis
Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon
Abstract:
Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user use for commercial health monitoring. The cheapest and most portable methods currently available are paper-based – lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers, these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one, or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible fluids. Miscible, liquid-liquid interfaces are common in microfluidic devices and are easily reproduced with simple geometries. Here, we demonstrate the use of electric fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt. We design this aqueous electrical interface, which becomes the biosensing “substrate,” to be intelligent – it “moves” only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance the detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas containing the relevant analyte. These typically involve many preparation and rinsing steps and are susceptible to surface fouling. Our microfluidic device is continuously flowing and renewing the “substrate,” and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical, in addition to being label-free.
Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles
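The frequency dependence that fDEP exploits can be sketched via the real part of a Clausius-Mossotti-style factor built from the complex permittivities of the two co-flowing streams (a textbook Maxwell-Wagner picture, not the authors' model; the stream properties are assumed).

```python
# Sketch: frequency-dependent polarization factor of a liquid-liquid interface.
import numpy as np

eps0 = 8.854e-12  # vacuum permittivity, F/m

def K(freq, eps1, sig1, eps2, sig2):
    """Real part of (e2* - e1*)/(e2* + e1*) with e* = eps - j*sigma/omega."""
    w = 2 * np.pi * freq
    e1 = eps1 * eps0 - 1j * sig1 / w
    e2 = eps2 * eps0 - 1j * sig2 / w
    return ((e2 - e1) / (e2 + e1)).real

for f in np.logspace(3, 8, 6):             # 1 kHz .. 100 MHz
    print(f"{f:9.2e} Hz  K = {K(f, 78, 1e-3, 68, 1e-2):+.3f}")
# A sign change of K with frequency corresponds to the interface tilting
# toward one stream or the other, the read-out exploited for detection.
```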
Procedia PDF Downloads 388
105 Meta-Analysis of Previously Unsolved Cases of Aviation Mishaps Employing Molecular Pathology
Authors: Michael Josef Schwerer
Abstract:
Background: Analyzing any aircraft accident is mandatory based on the regulations of the International Civil Aviation Organization and the respective country's criminal prosecution authorities. Legal medicine investigations are unavoidable when fatalities involve the flight crew or when doubts arise concerning the pilot's aeromedical health status before the event. As a result of frequently tremendous blunt and sharp force trauma, along with the impact of the aircraft with the ground, subsequent blast or fire exposure of the occupants, or putrefaction of the bodies in cases of delayed recovery, relevant findings can be masked or destroyed and therefore be inaccessible in standard pathology practice, which comprises just forensic autopsy and histopathology. Such cases are at considerable risk of remaining unsolved without legal consequences for those responsible. Further, no lessons can be drawn from these scenarios to improve flight safety and prevent future mishaps. Aims and Methods: To learn from previously unsolved aircraft accidents, re-evaluations of the investigation files and modern molecular pathology studies were performed. Genetic testing involved predominantly PCR-based analysis of gene regulation, studying DNA promoter methylation, RNA transcription, and post-transcriptional regulation. In addition, the presence or absence of infective agents, particularly DNA and RNA viruses, was studied. Technical adjustments of molecular genetic procedures were necessary when working with archived sample material. Standards for the proper interpretation of the respective findings had to be settled. Results and Discussion: Additional molecular genetic testing significantly contributes to the quality of forensic pathology assessment in aviation mishaps. Previously undetected cardiotropic viruses potentially explain, e.g., a pilot's sudden incapacitation resulting from cardiac failure or myocardial arrhythmia. In contrast, negative results for infective agents help rule out concerns about an accident pilot's fitness to fly and the aeromedical examiner's precedent decision to issue him or her an aeromedical certificate. Care must be taken in the interpretation of genetic testing for pre-existing diseases such as hypertrophic cardiomyopathy or ischemic heart disease. Molecular markers such as mRNAs or miRNAs, which can establish these diagnoses in clinical patients, might be misleading in flight crew members because of adaptive changes in their tissues resulting from repeated mild hypoxia during flight, for instance. Military pilots especially demonstrate significant physiological adjustments to their somatic burdens in flight, such as cardiocirculatory stress and air combat maneuvers. Their non-pathogenic alterations in gene regulation and expression will likely be mistaken for genuine disease by inexperienced investigators. Conclusions: The growing influence of molecular pathology on legal medicine practice has found its way into aircraft accident investigation. Provided that appropriate quality standards for laboratory work and data interpretation are in place, forensic genetic testing supports the medico-legal analysis of aviation mishaps and potentially reduces the number of unsolved events in the future.
Keywords: aviation medicine, aircraft accident investigation, forensic pathology, molecular pathology
Procedia PDF Downloads 47
104 Impact of Increased Radiology Staffing on After-Hours Radiology Reporting Efficiency and Quality
Authors: Peregrine James Dalziel, Philip Vu Tran
Abstract:
Objective / Introduction: Demand for radiology services from Emergency Departments (ED) continues to increase, with greater demands placed on radiology staff providing reports for the management of complex cases. Queuing theory indicates that wide variability of process time combined with the random nature of request arrival increases the probability of significant queues. This can lead to delays in the time-to-availability of radiology reports (TTA-RR) and potentially impaired ED patient flow. In addition, the greater “cognitive workload” of greater volume may lead to reduced productivity and increased errors. We sought to quantify the potential ED flow improvements obtainable from increased radiology providers serving 3 public hospitals in Melbourne, Australia. We sought to assess the potential productivity gains, quality improvement, and the cost-effectiveness of increased labor inputs. Methods & Materials: The Western Health Medical Imaging Department moved from single-resident coverage on weekend days (8:30 am-10:30 pm) to a limited period of two-resident coverage (1 pm-6 pm) on both weekend days. The TTA-RR for weekend CT scans was calculated from the PACS database for the 8-month period symmetrically around the date of the staffing change. A multivariate linear regression model was developed to isolate the improvement in TTA-RR between the two 4-month periods. Daily and hourly scan volume at the time of each CT scan was calculated to assess the impact of varying department workload. To assess any improvement in report quality/errors, a random sample of 200 studies was assessed to compare the average number of clinically significant over-read addendums to reports between the 2 periods. Cost-effectiveness was assessed by comparing the marginal cost of additional staffing against a conservative estimate of the economic benefit of improved ED patient throughput, using the Australian national insurance rebate for private ED attendance as a revenue proxy. Results: The primary resident on call and the type of scan accounted for most of the explained variability in time to report availability (R2=0.29). Increasing daily volume and hourly volume were associated with increased TTA-RR (1.5 min (p<0.01) and 4.8 min (p<0.01), respectively, per additional scan ordered within each time frame). Reports were available 25.9 minutes sooner on average in the 4 months post-implementation of double coverage (p<0.01), with an additional 23.6-minute improvement when 2 residents were on-site concomitantly (p<0.01). The aggregate average improvement in TTA-RR was 24.8 hours per weekend day. This represents the increased decision-making time available to ED physicians and a potential improvement in ED bed utilisation. 5% of reports from the intervention period contained clinically significant addendums vs 7% in the single-resident period, but this was not statistically significant (p=0.7). The marginal cost was less than the anticipated economic benefit, assuming a 50% capture of the improved TTA-RR in patient disposition and using the lowest available national insurance rebate as a proxy for economic benefit. Conclusion: TTA-RR improved significantly during the period of increased staff availability, both during the specific period of increased staffing and throughout the day. Increased labor utilisation is cost-effective compared with the potential improved productivity for ED cases requiring CT imaging.
Keywords: workflow, quality, administration, CT, staffing
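The regression described in Methods can be sketched as an ordinary least squares fit; the column names and synthetic rows below are placeholders, not the Western Health data.

```python
# Sketch: TTA-RR regressed on staffing period, concurrent double coverage,
# and hourly workload, in the spirit of the multivariate model above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "tta_rr":      [55, 61, 40, 35, 70, 33, 48, 30],  # minutes (synthetic)
    "post":        [0, 0, 0, 1, 0, 1, 1, 1],          # after staffing change
    "dual_onsite": [0, 0, 0, 1, 0, 1, 0, 1],          # 2 residents concurrently
    "hourly_vol":  [4, 5, 3, 4, 7, 3, 5, 2],          # scans ordered that hour
})
model = smf.ols("tta_rr ~ post + dual_onsite + hourly_vol", data=df).fit()
print(model.params)   # negative 'post' coefficient would mean faster reports
```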
Procedia PDF Downloads 113
103 Long-Term Tillage, Lime Matter and Cover Crop Effects under Heavy Soil Conditions in Northern Lithuania
Authors: Aleksandras Velykis, Antanas Satkus
Abstract:
Clay loam and clay soils are typical of northern Lithuania. These soils are susceptible to physical degradation in the case of intensive use of heavy machinery for field operations. However, clayey soils, having poor physical properties by origin, require more intensive tillage to maintain a proper physical condition for the grown crops. Therefore, not only is the choice of a suitable tillage system very important for these soils in the region, but a search for additional measures is also essential for maintaining a good soil physical state. Research objective: To evaluate the long-term effects of tillage of different intensities, as well as its combinations with supplementary agronomic practices, on the improvement of soil physical conditions and environmental sustainability. The experiment examined the influence of deep and shallow ploughing, ploughless tillage, combinations of ploughless tillage with incorporation of lime sludge and a cover crop for green manure, and application of the same cover crop for mulch without autumn tillage, under spring and winter crop growing conditions on clay loam (27% clay, 50% silt, 23% sand) Endocalcaric Endogleyic Cambisol. Methods: The indicators characterizing the impact of the investigated measures were determined using the following methods and devices: soil dry bulk density – by Eijkelkamp cylinder (100 cm3); soil water content – by weighing; soil structure – by Retsch sieve shaker; aggregate stability – by Eijkelkamp wet sieving apparatus; soil mineral nitrogen – in 1 N KCl extract using the colorimetric method. Results: The clay loam soil physical state (dry bulk density, structure, aggregate stability, water content) depends on the tillage system and its combination with the additional practices used. Application of cover crop winter mulch without tillage in autumn, ploughless tillage, and shallow ploughing causes compaction of the bottom (15-25 cm) topsoil layer. However, under ploughless tillage the soil dry bulk density in the subsoil (25-35 cm) layer is lower compared to deep ploughing. Soil structure in the upper (0-15 cm) topsoil layer and in the seedbed (0-5 cm) prepared for spring crops is usually worse when applying ploughless tillage or cover crop mulch without autumn tillage. Application of lime sludge under ploughless tillage conditions helped to avoid compaction and structure worsening in the upper topsoil layer, as well as to increase aggregate stability. Application of reduced tillage increased the soil water content in the upper topsoil directly after spring crop sowing. However, under reduced tillage the water content in the whole topsoil decreased markedly when droughty periods lasted for a long time. The combination of reduced tillage with a cover crop for green manure and winter mulch is significant for preserving the environment. Such application of cover crops reduces the leaching of mineral nitrogen into the deeper soil layers and environmental pollution. This work was supported by the National Science Program ‘The effect of long-term, different-intensity management of resources on the soils of different genesis and on other components of the agro-ecosystems’ [grant number SIT-9/2015], funded by the Research Council of Lithuania.
Keywords: clay loam, endocalcaric endogleyic cambisol, mineral nitrogen, physical state
Procedia PDF Downloads 227
102 Geovisualization of Human Mobility Patterns in Los Angeles Using Twitter Data
Authors: Linna Li
Abstract:
The capability to move around places is doubtless very important for individuals to maintain good health and social functions. People’s activities in space and time have long been a research topic in behavioral and socio-economic studies, particularly those focusing on the highly dynamic urban environment. By analyzing groups of people who share similar activity patterns, many socio-economic and socio-demographic problems and their relationships with individual behavior preferences can be revealed. Los Angeles, known for its large population, ethnic diversity, cultural mixing, and entertainment industry, faces great transportation challenges such as traffic congestion, parking difficulties, and long commutes. Understanding people’s travel behavior and movement patterns in this metropolis sheds light on potential solutions to complex problems regarding urban mobility. This project visualizes people’s trajectories in the Greater Los Angeles (L.A.) Area over a period of two months using Twitter data. A Python script was used to collect georeferenced tweets within the Greater L.A. Area, including Ventura, San Bernardino, Riverside, Los Angeles, and Orange counties. Information associated with tweets includes text, time, location, and user ID. Information associated with users includes name, the number of followers, etc. Both aggregated and individual activity patterns are demonstrated using various geovisualization techniques. Locations of individual Twitter users were aggregated to create a surface of activity hot spots at different time instants using kernel density estimation, which shows the dynamic flow of people’s movement throughout the metropolis in a twenty-four-hour cycle. In the 3D geovisualization interface, the z-axis indicates time, covering 24 hours, and the x-y plane shows the geographic space of the city. Any two points on the z-axis can be selected for displaying the activity density surface within a particular time period. In addition, daily trajectories of Twitter users were created using space-time paths that show the continuous movement of individuals throughout the day. When a personal trajectory is overlaid on top of ancillary layers, including land use and road networks, in the 3D visualization, the vivid representation of a realistic view of the urban environment boosts the situational awareness of the map reader. A comparison of the same individual’s paths on different days shows some regular patterns on weekdays for some Twitter users, but for other users, daily trajectories are more irregular and sporadic. This research makes contributions in two major areas: geovisualization of spatial footprints to understand travel behavior using a big data approach, and dynamic representation of activity space in the Greater Los Angeles Area. Unlike traditional travel surveys, social media (e.g., Twitter) provides an inexpensive way of collecting data on spatio-temporal footprints. The visualization techniques used in this project are also valuable for analyzing other spatio-temporal data in the exploratory stage, thus leading to informed decisions about generating and testing hypotheses for further investigation. The next step of this research is to separate users into different groups based on gender/ethnic origin and compare their daily trajectory patterns.
Keywords: geovisualization, human mobility pattern, Los Angeles, social media
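The kernel density estimation step can be sketched with SciPy; the coordinates below are random stand-ins for the collected tweets, centered on an illustrative point in central L.A.

```python
# Sketch: hourly activity hot-spot surface from georeferenced tweet points.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
lon = rng.normal(-118.24, 0.3, 500)       # synthetic tweet longitudes
lat = rng.normal(34.05, 0.2, 500)         # synthetic tweet latitudes

kde = gaussian_kde(np.vstack([lon, lat]))
gx, gy = np.meshgrid(np.linspace(lon.min(), lon.max(), 100),
                     np.linspace(lat.min(), lat.max(), 100))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print("hot-spot cell:", np.unravel_index(density.argmax(), density.shape))
# Repeating this per hour of day yields the 24-frame dynamic surface described.
```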
Procedia PDF Downloads 121
101 A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe
Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira
Abstract:
Supporting cross-border secure and reliable exchange of data and documents and promoting data interoperability are critical for Europe to enhance sectors such as eFinance, eJustice, and eHealth. This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility Programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 access points and SMPs in Europe to provide secure document exchange using the eDelivery DSI (Digital Service Infrastructure) among both private and public entities. Moreover, the project demonstrates the feasibility and interest of the solution by providing several months of interoperability among the providers of the six partners in different EU countries. To achieve those goals, we followed a methodology that first established a common background for the requirements in the partner countries and the European regulations. Then, the partners implemented access points in each country, including their service metadata publisher (SMP), to give their clients access to the pan-European network. Finally, we set up interoperability tests with the other access points of the consortium. The tests include the use of each entity's production-ready information systems that process the data, to confirm all steps of the data exchange. For the access points, we chose AS4 over other existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern crypto algorithms, and more authentication types, such as username-password, X.509, and SAML authentication. The main contribution of the MADE project is to open the path for European companies to use eDelivery services with cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 profile. It also includes the development and integration of new components, integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support those profiles; thus, further efforts will be needed to promote this technology among companies. The consortium includes 9 partners. Of these, 2 are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other 7 (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in secure delivery of electronic documents and information integration brokerage in their respective countries. To achieve cross-border interoperability, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental in testing the feasibility of cross-border document eDelivery in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged through Europe, forming the basis for extending the network to the whole of Europe. This project has been funded under Connecting Europe Facility Agreement number INEA/CEF/ICT/A2016/1278042, Action No. 2016-EU-IA-0063.
Keywords: security, e-delivery, e-invoicing, e-document exchange, trust
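As an illustration of the SMP discovery step that precedes an AS4 exchange, the sketch below derives an SMP lookup hostname following the commonly documented PEPPOL SML naming convention (an MD5 hash of the lowercased participant identifier resolved through the SML DNS zone); the participant identifier, scheme, and SML zone shown are assumptions for illustration, not MADE project values.

# A minimal sketch (illustrative, not the project's code) of PEPPOL-style
# SMP hostname derivation for eDelivery participant discovery.
import hashlib

def smp_hostname(participant_id: str, scheme: str, sml_zone: str) -> str:
    # hash the lowercased participant identifier and prefix it per convention
    digest = hashlib.md5(participant_id.lower().encode("utf-8")).hexdigest()
    return f"B-{digest}.{scheme}.{sml_zone}"

# hypothetical participant identifier and zone
print(smp_hostname("9915:test", "iso6523-actorid-upis",
                   "edelivery.tech.ec.europa.eu"))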
Procedia PDF Downloads 267
100 Complete Genome Sequence Analysis of Pasteurella multocida Subspecies multocida Serotype A Strain PMTB2.1
Authors: Shagufta Jabeen, Faez J. Firdaus Abdullah, Zunita Zakaria, Nurulfiza M. Isa, Yung C. Tan, Wai Y. Yee, Abdul R. Omar
Abstract:
Pasteurella multocida (PM) is an important veterinary opportunistic pathogen particularly associated with septicemic pasteurellosis, pneumonic pasteurellosis, and hemorrhagic septicemia in cattle and buffaloes. P. multocida serotype A has been reported to cause fatal pneumonia and septicemia. The Pasteurella multocida subspecies multocida serotype A Malaysian isolate PMTB2.1 was first isolated from buffaloes that died of septicemia. In this study, the genome of P. multocida strain PMTB2.1 was sequenced using a third-generation sequencing technology, the PacBio RS2 system, and analyzed bioinformatically via de novo assembly followed by in-depth comparative genomics. De novo assembly of the PacBio raw reads generated 3 contigs; gap filling of the aligned contigs with PCR sequencing then yielded a single contiguous circular chromosome with a genomic size of 2,315,138 bp and a GC content of approximately 40.32% (accession number CP007205). The PMTB2.1 genome comprises 2,176 protein-coding sequences, 6 rRNA operons, 56 tRNAs, and 4 ncRNAs. A comparative genome sequence analysis of PMTB2.1 with eight other complete genomes, comprising Actinobacillus pleuropneumoniae, Haemophilus parasuis, Escherichia coli, and five P. multocida genomes (PM70, PM36950, PMHN06, PM3480, and PMHB01), was carried out based on OrthoMCL analysis and a Venn diagram. The analysis showed that 282 CDSs (13%) are unique to PMTB2.1 and that 1,125 CDSs have orthologs in all the genomes. This reflects the overall close relationship of these bacteria and supports their classification in the Gamma subdivision of the Proteobacteria. In addition, genomic distance analysis among all nine genomes indicated that PMTB2.1 is closely related to the other five P. multocida strains, with genomic distances less than 0.13. Synteny analysis shows subtle differences in genetic structure among the different P. multocida strains, indicating the dynamics of frequent gene transfer events among them. However, PM3480 and PM70 exhibited exceptionally large structural variation, since they are swine and chicken isolates. Furthermore, the genomic structure of PMTB2.1 most resembles that of PM36950, with a genomic size difference of approximately 34,380 bp (smaller than PM36950), and the strain-specific Integrative and Conjugative Element (ICE) found in PM36950 is absent in PMTB2.1. Meanwhile, two intact prophage sequences of approximately 62 kb were found only in PMTB2.1; one of the phages is similar to the transposable phage SfMu. A phylogenomic tree was constructed and rooted with E. coli, A. pleuropneumoniae, and H. parasuis based on the OrthoMCL analysis. The genome of P. multocida strain PMTB2.1 clustered with the bovine isolates PM36950 and PMHB01, was separated from the avian isolate PM70 and the swine isolates PM3480 and PMHN06, and is distant from Actinobacillus and Haemophilus. Previous studies based on Single Nucleotide Polymorphisms (SNPs) and Multilocus Sequence Typing (MLST) were unable to show a clear relationship between P. multocida phylogeny and host species. In conclusion, this study has provided insight into the genomic structure of PMTB2.1 in terms of potential genes that can function as virulence factors, for future studies elucidating the mechanisms behind the ability of the bacterium to cause disease in susceptible animals.
Keywords: comparative genomics, DNA sequencing, phage, phylogenomics
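A minimal sketch of the core/unique partitioning behind the OrthoMCL and Venn diagram comparison reported above, using made-up ortholog cluster IDs rather than the study's annotations.

# Count ortholog clusters shared by all genomes (core) and clusters unique to
# one strain, from per-genome sets of cluster IDs. All IDs are synthetic.
genomes = {
    "PMTB2.1": {"c1", "c2", "c3", "c9"},
    "PM70":    {"c1", "c2", "c4"},
    "PM36950": {"c1", "c2", "c5"},
}

core = set.intersection(*genomes.values())
unique_to = {
    name: cds - set.union(*(s for n, s in genomes.items() if n != name))
    for name, cds in genomes.items()
}
print("core clusters:", sorted(core))                 # shared by all genomes
print("unique to PMTB2.1:", sorted(unique_to["PMTB2.1"]))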
Procedia PDF Downloads 188
99 Sensing Study through Resonance Energy and Electron Transfer between Förster Resonance Energy Transfer Pair of Fluorescent Copolymers and Nitro-Compounds
Authors: Vishal Kumar, Soumitra Satapathi
Abstract:
Förster Resonance Energy Transfer (FRET) is a powerful technique used to probe close-range molecular interactions. Physically, the FRET phenomenon manifests as a dipole–dipole interaction between closely juxtaposed fluorescent molecules (10–100 Å). Our effort is to employ this FRET technique to make a prototype device for highly sensitive detection of environmental pollutants. Among the most common environmental pollutants, nitroaromatic compounds (NACs) are of particular interest because of their durability and toxicity. Hence, sensitive and selective detection of small amounts of nitroaromatic explosives, in particular 2,4,6-trinitrophenol (TNP), 2,4-dinitrotoluene (DNT), and 2,4,6-trinitrotoluene (TNT), has been a critical challenge, given the increasing threat of explosive-based terrorism and the need for environmental monitoring of drinking and waste water. In addition, the extensive use of TNP in several other areas, such as burn ointments, pesticides, and the glass and leather industries, has resulted in environmental accumulation, eventually contaminating soil and aquatic systems. To date, a large number of elegant methods, including fluorimetry, gas chromatography, and mass, ion-mobility, and Raman spectrometry, have been successfully applied to explosive detection. Among these efforts, fluorescence-quenching methods based on the FRET mechanism show good assembly flexibility, high selectivity, and high sensitivity. Here, we report a FRET-based sensor system for the highly selective detection of NACs such as TNP, DNT, and TNT. The sensor system is composed of a copolymer, Poly[(N,N-dimethylacrylamide)-co-(Boc-Trp-EMA)] (RP), bearing a tryptophan derivative in the side chain as donor, and a dansyl-tagged copolymer, P(MMA-co-Dansyl-Ala-HEMA) (DCP), as acceptor. Initially, the inherent fluorescence of the RP copolymer is quenched by non-radiative energy transfer to DCP, which only happens once the two molecules are within the Förster critical distance (R0). The excellent spectral overlap (Jλ = 6.08×10¹⁴ nm⁴M⁻¹cm⁻¹) between the donor's (RP) emission profile and the acceptor's (DCP) absorption profile makes them an efficient FRET pair, as further confirmed by the high rate of energy transfer from RP to DCP (0.87 ns⁻¹) and by lifetime measurements using time-correlated single photon counting (TCSPC), which validate a FRET efficiency of 64%. This FRET pair exhibited a specific fluorescence response to NACs such as DNT, TNT, and TNP, with LODs of 5.4, 2.3, and 0.4 µM, respectively. The detection of NACs occurs with high sensitivity through photoluminescence quenching of the FRET signal, induced by photo-induced electron transfer (PET) from the electron-rich FRET pair to the electron-deficient NAC molecules. The estimated Stern-Volmer constants (KSV) for DNT, TNT, and TNP are 6.9 × 10³, 7.0 × 10³, and 1.6 × 10⁴ M⁻¹, respectively. The mechanistic details of the molecular interactions were established by time-resolved fluorescence, steady-state fluorescence, and absorption spectroscopy, which confirmed that the sensing process is of mixed type, i.e., both dynamic and static quenching, as the lifetime of the FRET system (0.73 ns) is reduced to 0.55, 0.57, and 0.61 ns by DNT, TNT, and TNP, respectively. In summary, the simplicity and sensitivity of this novel FRET sensor open up the possibility of designing optical sensors for various NACs on one single platform, toward a multimodal sensor for environmental monitoring and future field-based studies.
Keywords: FRET, nitroaromatic, Stern-Volmer constant, tryptophan and dansyl tagged copolymer
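The two quantitative relations used above are the standard ones: FRET efficiency from donor lifetimes, E = 1 − τDA/τD, and the Stern-Volmer law, F0/F = 1 + KSV[Q]. A minimal sketch with illustrative numbers follows; only the 0.73 ns quenched lifetime echoes the abstract, while the donor-only lifetime and the quenching series are assumed.

# FRET efficiency from lifetimes and a linear Stern-Volmer fit. All values
# are illustrative assumptions, not the paper's raw data.
import numpy as np

tau_D, tau_DA = 2.0, 0.73        # hypothetical donor lifetimes (ns)
E = 1.0 - tau_DA / tau_D
print(f"FRET efficiency: {E:.2f}")

# hypothetical quenching series: quencher concentration (M) vs. F0/F
conc = np.array([0.0, 2e-6, 4e-6, 6e-6, 8e-6])
f0_over_f = np.array([1.00, 1.03, 1.07, 1.10, 1.13])

# the slope of F0/F vs. [Q] is the Stern-Volmer constant KSV
ksv, intercept = np.polyfit(conc, f0_over_f, 1)
print(f"KSV ~ {ksv:.2e} M^-1")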
Procedia PDF Downloads 136
98 Digital Adoption of Sales Support Tools for Farmers: A Technology Organization Environment Framework Analysis
Authors: Sylvie Michel, François Cocula
Abstract:
Digital agriculture is an approach that exploits information and communication technologies. These encompass data acquisition tools such as mobile applications, satellites, sensors, connected devices, and smartphones. Additionally, it involves transfer and storage technologies such as 3G/4G coverage, low-bandwidth terrestrial or satellite networks, and cloud-based systems. Furthermore, embedded or remote processing technologies, including drones and robots for process automation, along with high-speed communication networks accessible through supercomputers, are integral components of this approach. While farm-level adoption studies of digital agricultural technologies have emerged in recent years, they remain relatively limited in comparison to other agricultural practices. To bridge this gap, this study examines farmers' intention to adopt digital tools, employing the technology-organization-environment (TOE) framework. A qualitative research design was used, comprising fifteen semi-structured interviews with key stakeholders conducted both before and after the 2020-2021 COVID-19 lockdowns in France. The interview transcripts then underwent thorough thematic content analysis, and the data and verbatim quotes were triangulated for validation. A coding process systematically organized the data, ensuring an orderly and structured classification. Our research extends its contribution by delineating sub-dimensions within each primary dimension. A total of nine sub-dimensions were identified, categorized as follows: perceived usefulness for communication, perceived usefulness for productivity, and perceived ease of use constitute the first dimension; technological resources, financial resources, and human capabilities constitute the second dimension; and market pressure, institutional pressure, and the COVID-19 situation constitute the third dimension. Furthermore, this analysis enriches the TOE framework by incorporating entrepreneurial orientation as a moderating variable. Managerial orientation emerges as a pivotal factor influencing adoption intention, with producers acknowledging the significance of digital sales support tools in countering "greenwashing" and elevating their overall brand image. Specifically, the analysis illustrates that producers recognize the potential of digital tools for saving time and streamlining sales processes, leading to heightened productivity. Moreover, it highlights that the intent to adopt digital sales support tools is influenced by a market mimicry effect. Additionally, it demonstrates a negative association between the intent to adopt these tools and the pressure exerted by institutional partners. Finally, this research establishes a positive link between the intent to adopt digital sales support tools and economic fluctuations, notably during the COVID-19 pandemic. The adoption of sales support tools in agriculture is thus a multifaceted challenge encompassing three dimensions and nine sub-dimensions. By examining the adoption of digital farming technologies at the farm level through the TOE framework, this analysis provides significant insights for policymakers, stakeholders, and farmers. These insights are instrumental in making informed decisions to facilitate a successful digital transition in agriculture, effectively addressing sector-specific challenges.
Keywords: adoption, digital agriculture, e-commerce, TOE framework
Procedia PDF Downloads 61
97 Pathophysiological Implications in Immersion Treatment Methods of Ichthyophthiriasis Disease in African Catfish (Clarias gariepinus) Using Moringa oleifera Extract
Authors: Ikele Chika Bright, Mgbenka Bernard Obialo, Ikele Chioma Faith
Abstract:
Ichthyophthiriasis is a prevalent protozoan (ectoparasite) disease mostly affecting cultured and aquarium fishes. The majority of chemotherapeutants lack the efficacy to completely eliminate the Ich parasite without affecting the environment, and they are not safe for human health. The present work focuses on evaluating different immersion treatments of African catfish (Clarias gariepinus) infected with ichthyophthiriasis and treated with a non-chemical, environmentally friendly parasiticide, Moringa oleifera. A total of 800 apparently healthy, parasite-free (verified by examination) post-juvenile catfish were obtained from a reputable farm and disinfected with potassium permanganate in a quarantine tank to remove any possible external parasites. The fish were then challenged with approximately 44,000 infective theronts, which were obtained through serial passages by cohabitation. Seven groups (A-G) of post-juveniles were used for the experiment, which was carried out in three stages: dips (60 minutes), short-term treatment (24-96 h), and prolonged bath treatment (0-15 days). The concentrations were selected based on the LC50 of the plant material, from which dose-dependent factors were used to set the various treatment concentrations. In the dip treatment, groups D-G were treated with 1,500, 2,500, 3,500, and 4,500 mg/L; in the short-term treatment with 150, 250, 350, and 450 mg/L; and in the prolonged bath with 15, 25, 35, and 45 mg/L of the plant extract, whereas groups A, B, and C were the normal control, Ich-infested untreated, and Ich-infested treated with a standard drug (acriflavine), respectively. All treatment types, at their corresponding concentrations, showed almost complete elimination of the adult parasites (trophonts) both in the gills and in body smears, making M. oleifera a potential parasiticide. There were serious pathological alterations in the skin and gills, which are usually the main points of invasion for Ich parasites, but no significant morphological differences were noted among the treated groups subjected to the different immersion treatment patterns. Epitheliocystis, aneurysm, oedema, hemorrhage, and localization of the adult parasite were the common observations in the gills, whereas degeneration of muscle fibre, dermatitis, hemorrhage, oedema, abscess formation, and keratinisation were observed in the skin. There were no pathological changes in the control group. Moreover, biochemical parameters (urea, creatinine, albumin, globulin, total protein, ALT, AST), blood chemistry (sodium, chloride, potassium, bicarbonate), antioxidants (CAT, SOD, GPx, LPO), enzymatic activities (myeloperoxidase, thioredoxin reductase), inflammatory response (C-reactive protein), stress markers (lactate dehydrogenase), haematological parameters (RBC, PCV, WBC, Hb, and differential count), and lipid profile (total cholesterol, triglycerides, high-density lipoprotein, and low-density lipoprotein) showed a mix of significant (P<0.05) and non-significant (P>0.05) responses among the Ich-infested fish under the three immersion treatments. It is suggested that M. oleifera may serve as an alternative to chemotherapeutants for the control of ichthyophthiriasis in African catfish Clarias gariepinus.
Keywords: Ichthyophthirius multifiliis, immersion treatment, pathophysiology, African catfish
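Since the treatment concentrations were derived from an LC50, a minimal sketch of how an LC50 is typically estimated from dose-response data is given below; the doses, mortality fractions, and the two-parameter log-logistic model are illustrative assumptions, not the study's assay.

# Fit a two-parameter log-logistic dose-response curve and read off the
# midpoint (LC50). All data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, lc50, slope):
    # fraction of mortality as a function of dose
    return 1.0 / (1.0 + (lc50 / dose) ** slope)

dose = np.array([50, 100, 200, 400, 800, 1600.0])      # mg/L, illustrative
mortality = np.array([0.05, 0.10, 0.30, 0.55, 0.85, 0.97])

(lc50, slope), _ = curve_fit(log_logistic, dose, mortality, p0=(300, 2))
print(f"estimated LC50 ~ {lc50:.0f} mg/L (slope {slope:.2f})")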
Procedia PDF Downloads 391
96 Application of Satellite Remote Sensing in Support of Water Exploration in the Arab Region
Authors: Eman Ghoneim
Abstract:
The Arabian deserts include some of the driest areas on Earth. Yet their landforms preserve a record of past wet climates. During humid phases, the desert was green and contained permanent rivers, inland deltas, and lakes. Some of their water would have seeped down and replenished the groundwater aquifers. When the wet periods came to an end, several thousand years ago, the entire region transformed into an extended band of desert, and its original fluvial surface was totally covered by windblown sand. In this work, radar and thermal infrared images were used to reveal numerous hidden surface and subsurface features. Radar's long wavelength has the unique ability to penetrate dry surface sands and uncover buried subsurface terrain. Thermal infrared has also proven capable of spotting cooler, moist areas, particularly within hot dry surfaces. Integrating Radarsat images and GIS revealed several previously unknown paleoriver and lake basins in the region. One of these systems, known as the Kufrah, is the largest yet identified river basin in the Eastern Sahara. This river basin, which straddles the border between Egypt and Libya, flowed north parallel to the adjacent Nile River, with an extensive drainage area of 235,500 km² and a massive valley width of 30 km in some parts. This river most probably served as a spillway for overflow from Megalake Chad to the Mediterranean Sea and thus may have acted as a natural water corridor used by human ancestors to migrate northward across the Sahara. The Gilf Kebir is another large paleoriver system, located just east of Kufrah and emanating from the Gilf Plateau in Egypt. Both river systems terminate in vast inland deltas at the southern margin of the Great Sand Sea. The trends of their distributary channels indicate that both rivers drained to a topographic depression that was periodically occupied by a massive lake. During dry climates, the lake dried up and was roofed by sand deposits, which today form the Great Sand Sea. The enormity of the lake basin explains why continuous extraction of groundwater in this area is possible. A similar lake basin, delimited by former shorelines, was detected by spaceborne radar data just across the border in Sudan. This lake, called the Northern Darfur Megalake, has a massive size of 30,750 km². These former lakes and rivers could potentially hold vast reservoirs of groundwater, oil, and natural gas at depth. Like the radar data, thermal infrared images have proven useful in detecting potential locations of subsurface water accumulation in desert regions. Analysis of both ASTER and daily MODIS thermal channels reveals several subsurface cool, moist patches in the sandy desert of the Arabian Peninsula. The analysis indicated that such evaporative cooling anomalies resulted from the subsurface transmission of monsoonal rainfall from the mountains to the adjacent plain. Drilling a number of wells in several locations confirmed the presence of productive water aquifers, validating the data and the approaches adopted for water exploration in dry regions.
Keywords: radarsat, SRTM, MODIS, thermal infrared, near-surface water, ancient rivers, desert, Sahara, Arabian peninsula
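A minimal sketch of the kind of thermal-infrared screening described above: flag pixels whose land-surface temperature is anomalously cool relative to their neighbourhood, a possible sign of near-surface moisture. The grid, the implanted patch, and the thresholds are assumptions for illustration, not MODIS or ASTER products.

# Detect evaporative-cooling candidates on a synthetic LST grid.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
lst = 45.0 + rng.normal(0, 0.5, (500, 500))   # hypothetical daytime LST (deg C)
lst[200:220, 300:330] -= 3.0                  # implanted cool, moist patch

local_mean = uniform_filter(lst, size=51)     # neighbourhood background
anomaly = lst - local_mean
cool_mask = anomaly < -1.5                    # cooler-than-background pixels
print("candidate pixels:", int(cool_mask.sum()))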
Procedia PDF Downloads 247
95 Human Wildlife Conflict Outside Protected Areas of Nepal: Causes, Consequences and Mitigation Strategies
Authors: Kedar Baral
Abstract:
This study was carried out in the Mustang, Kaski, Tanahun, Baitadi, and Jhapa districts of Nepal. The study explored the spatial and temporal patterns of HWC, the socio-economic factors associated with it, the impacts of conflict on people's lives and livelihoods and on the survival of wildlife species, and the impact of climate change and forest fire on HWC. The study also evaluated people's attitudes towards wildlife conservation and assessed relevant policies and programs. A questionnaire survey was carried out with 250 respondents, and both socio-demographic and HWC-related information was collected. Secondary information was collected from Divisional Forest Offices and the Annapurna Conservation Area Project. HWC events were grouped by season, month, and site (forest type, distance from forest and settlement), and the coordinates of the events were exported to ArcGIS. Collected data were analyzed using descriptive statistics in Excel and the R program. A total of 1,465 events were recorded in the 5 districts between 2015 and 2019. Of these, livestock killing, crop damage, human attack, and cattle-shed damage events accounted for 70%, 12%, 11%, and 7%, respectively. Among the 151 human attack cases, 23 people were killed and 128 were injured. The elephant in the Terai, the common leopard and monkey in the Middle Mountains, and the snow leopard in the high mountains were found to be the major problem animals. Common leopard attacks were more frequent in autumn, in the evening, and in human settlement areas, whereas elephant attacks were more frequent in winter, during the daytime, and on farmland. Poor farmers were found to be the most victimized, losing 26% of their income to crop raiding and livestock depredation. On the other hand, people are killing many wild animals in revenge, and this number is increasing every year. Based on people's perceptions, climate change is causing increased temperatures and forest fire events and decreased water sources within the forest. Owing to the scarcity of food and water within forests, wildlife are compelled to dwell near human settlement areas; hence, HWC events are increasing. Nevertheless, more than half of the respondents were positive about conserving all wildlife species. Forests outside PAs are under the community forestry (CF) system, which has restored the forest, improved habitat, and increased wildlife. However, CF policies and programs were found to focus on forest management, with little priority given to wildlife conservation and HWC mitigation. The government's compensation/relief scheme for wildlife damage was found to be somewhat effective in managing HWC, but the lengthy process, its applicability to damage from only a few wildlife species, and the rapidly increasing number of events make it necessary to revisit the scheme. Based on these findings, the study suggests carrying out awareness-raising activities for poor farmers, linking people's property to insurance schemes, conducting habitat management activities within CF, promoting unpalatable crops, improving livestock sheds, simplifying the compensation scheme and establishing a fund at the district level, and incorporating wildlife conservation and HWC mitigation programs into CF. Finally, the study suggests rigorous research to understand the impacts of current forest management practices on forests, biodiversity, wildlife, and HWC.
Keywords: community forest, conflict mitigation, wildlife conservation, climate change
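A minimal sketch of the descriptive grouping step named above (events tabulated by season and damage type before the coordinates are mapped in ArcGIS); the records are made up for illustration, not the study's dataset.

# Tabulate conflict events by season and damage type with pandas.
import pandas as pd

events = pd.DataFrame({
    "season": ["autumn", "autumn", "winter", "winter", "summer"],
    "type":   ["livestock", "human attack", "crop", "livestock", "crop"],
    "lat":    [28.2, 28.3, 26.6, 26.7, 28.1],   # hypothetical coordinates
    "lon":    [83.9, 84.0, 87.9, 88.0, 84.1],
})

summary = events.groupby(["season", "type"]).size().unstack(fill_value=0)
print(summary)  # events per season and damage type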
Procedia PDF Downloads 117
94 Expanded Polyurethane Foams and Waterborne-Polyurethanes from Vegetable Oils
Authors: A. Cifarelli, L. Boggioni, F. Bertini, L. Magon, M. Pitalieri, S. Losio
Abstract:
Nowadays, growing environmental awareness and the dwindling of fossil resources push the polyurethane (PU) industry towards renewable polymers with a low carbon footprint to replace feedstocks from petroleum sources. The main challenge in this field consists of replacing high-performance fossil-fuel-based products with novel synthetic polymers derived from 'green monomers'. Bio-polyols from plant oils have attracted significant industrial interest and major attention in scientific research due to their availability and biodegradability. Triglycerides rich in unsaturated fatty acids, such as soybean oil (SBO) and linseed oil (ELO), are particularly interesting because their structures and functionalities are tunable by chemical modification to obtain polymeric materials with the expected final properties. Unfortunately, their use is still limited by processing or performance problems, because a high functionality and OH number of the polyols result in a high cross-linking density in the resulting PUs. The main aim of this study is to evaluate soy- and linseed-based polyols as precursors of prepolymers for the production of polyurethane foams (PUFs) or waterborne polyurethanes (WPUs) used as coatings. An effective reaction route was chosen for its simplicity and economic impact: the bio-polyols were synthesized by a two-step method consisting of epoxidation of the double bonds in the vegetable oils and solvent-free ring-opening of the oxirane with organic acids. No organic solvents were used. Acids with different moieties (aliphatic or aromatic) and different hydrocarbon backbone lengths can be used to customize polyols with different functionalities. The ring-opening reaction requires fine tuning of the experimental conditions (time, temperature, molar ratio of carboxylic acid to epoxy groups) to control the acidity of the end product as well as the amount of residual starting materials. In addition, a Lewis base catalyst is used to favor the ring-opening of internal epoxy groups of the epoxidized oil and to minimize the formation of cross-linked structures, in order to achieve less viscous, more processable polyols with narrower polydispersity indices (molecular weights lower than 2,000 g/mol). The functionality of the optimized polyols is tuned from 2 to 4 per molecule. The obtained polyols were characterized by means of GPC, NMR (¹H, ¹³C), and FT-IR spectroscopy to evaluate molecular masses, molecular mass distributions, microstructures, and linkage pathways. Several polyurethane foams were prepared by the prepolymer method, blending conventional synthetic polyols with the new bio-polyols from soybean and linseed oils without using organic solvents. The compatibility of these bio-polyols with commercial polyols and diisocyanates is demonstrated. The influence of the bio-polyols on foam morphology (cellular structure, interconnectivity), density, and mechanical and thermal properties was studied. Moreover, bio-based WPUs were synthesized by well-established processing technology. In this synthesis, a portion of the commercial polyols is substituted by the new bio-polyols, and the properties of the coatings on leather substrates were evaluated to determine coating hardness, abrasion resistance, impact resistance, gloss, chemical resistance, flammability, durability, and adhesive strength.
Keywords: bio-polyols, polyurethane foams, solvent-free synthesis, waterborne-polyurethanes
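The interplay of functionality, molecular weight, and OH number noted above follows the standard hydroxyl-value relation OHV = 56,100 · f / Mn (mg KOH/g). A minimal sketch over the abstract's stated ranges, with the specific functionality/Mn pairings assumed for illustration:

# Hydroxyl number from functionality and number-average molecular weight.
MW_KOH = 56.1  # g/mol

def hydroxyl_number(functionality: float, mn_g_per_mol: float) -> float:
    return MW_KOH * 1000.0 * functionality / mn_g_per_mol  # mg KOH/g

for f, mn in [(2, 1000), (3, 1500), (4, 2000)]:   # illustrative pairings
    print(f"f = {f}, Mn = {mn} g/mol -> OHV ~ {hydroxyl_number(f, mn):.0f} mg KOH/g")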
Procedia PDF Downloads 132
93 Thermally Conductive Polymer Nanocomposites Based on Graphene-Related Materials
Authors: Alberto Fina, Samuele Colonna, Maria del Mar Bernal, Orietta Monticelli, Mauro Tortello, Renato Gonnelli, Julio Gomez, Chiara Novara, Guido Saracco
Abstract:
Thermally conductive polymer nanocomposites are of high interest for several applications, including low-temperature heat recovery, heat exchangers in corrosive environments, and heat management in electronics and flexible electronics. In this paper, the preparation of thermally conductive nanocomposites exploiting graphene-related materials is addressed, along with their thermal characterization. In particular, the correlations of (1) the chemical and physical features of the nanoflakes and (2) the processing conditions with the heat conduction properties of the nanocomposites are studied. Polymers are heat insulators; therefore, the inclusion of conductive particles is the typical route to sufficient thermal conductivity. In addition to traditional microparticles such as graphite and ceramics, several nanoparticles have been proposed for use in polymer nanocomposites, including carbon nanotubes and graphene. Indeed, thermal conductivities of both carbon nanotubes and graphenes have been reported in the wide range of about 1500 to 6000 W/mK, although this property may decrease dramatically as a function of size, number of layers, density of topological defects, and re-hybridization defects, as well as the presence of impurities. Different synthetic techniques have been developed, including mechanical cleavage of graphite, epitaxial growth on SiC, chemical vapor deposition, and liquid-phase exfoliation. However, the industrial scale-up of graphene, defined as an individual, single-atom-thick sheet of hexagonally arranged sp²-bonded carbons, still remains very challenging. For large-scale bulk applications in polymer nanocomposites, graphene-related materials such as multilayer graphenes (MLG), reduced graphene oxide (rGO), or graphite nanoplatelets (GNP) are currently the most interesting graphene-based materials. In this paper, different types of graphene-related materials were characterized for their chemical and physical features as well as for the thermal properties of individual flakes. Two selected rGOs were annealed at 1700 °C in vacuum for 1 h to reduce the defectiveness of the carbon structure. The thermal conductivity increase of individual GNP flakes with annealing was assessed via scanning thermal microscopy. Graphene nanopapers were prepared from both conventional rGO and annealed rGO flakes. Characterization of the nanopapers evidenced a five-fold increase in in-plane thermal diffusivity for annealed nanoflakes compared to pristine ones, demonstrating the importance of reducing structural defectiveness to maximize heat dissipation performance. Both pristine and annealed rGO were used to prepare polymer nanocomposites by reactive melt extrusion. A two- to three-fold increase in the thermal conductivity of the nanocomposite was observed for high-temperature-treated rGO compared to untreated rGO, evidencing the importance of using nanoflakes of low defectiveness. Furthermore, the study of different processing parameters (time, temperature, shear rate) during the preparation of poly(butylene terephthalate) nanocomposites evidenced a clear correlation with the dispersion and fragmentation of the GNP nanoflakes, which in turn affected the thermal conductivity. A thermal conductivity of about 1.7 W/mK, i.e., one order of magnitude higher than that of the pristine polymer, was obtained with 10 wt% of annealed GNPs, in line with state-of-the-art nanocomposites prepared by more complex and less upscalable in situ polymerization processes.
Keywords: graphene, graphene-related materials, scanning thermal microscopy, thermally conductive polymer nanocomposites
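The diffusivity measured on the nanopapers and the conductivity reported for the composites are linked by the standard relation k = α · ρ · cp. A minimal sketch follows; the density and specific heat are typical-order values assumed for a PBT-like matrix, not the paper's measurements.

# Convert thermal diffusivity to thermal conductivity, k = alpha * rho * cp.
def conductivity(alpha_mm2_s: float, rho_kg_m3: float, cp_j_kg_k: float) -> float:
    alpha_m2_s = alpha_mm2_s * 1e-6    # mm^2/s -> m^2/s
    return alpha_m2_s * rho_kg_m3 * cp_j_kg_k   # W/(m K)

# hypothetical composite: diffusivity 1.0 mm^2/s, density 1350 kg/m^3,
# specific heat 1300 J/(kg K)
print(f"k ~ {conductivity(1.0, 1350, 1300):.2f} W/mK")   # ~1.76 W/mK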
Procedia PDF Downloads 268
92 Information Pollution: Exploratory Analysis of Sub-Saharan African Media’s Capabilities to Combat Misinformation and Disinformation
Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi
Abstract:
The role of information in societal development and growth cannot be over-emphasized. Harnessing the flow of information has long been a strategy for building an egalitarian society; the same flow has also become a tool for throwing society into chaos and anarchy. It has been adopted as a weapon of war and a veritable instrument of psychological warfare with a variety of uses. That is why some scholars posit that information can be deployed as a weapon to wreak "mass destruction" or promote "mass development". When used as a tool of destruction, its effect on society is like that of an atomic bomb, which, when released, pollutes the air and suffocates the people. Technological advancement has further exposed the latent power of information, and many societies seem to be overwhelmed by its negative effects. While information remains a bedrock of democracy, the information ecosystem across the world is currently facing a more difficult battle than ever before due to information pluralism and technological advancement. The more the agents involved try to combat the menace, the more difficult and complex it proves to curb. In a region like sub-Saharan Africa, with fragile democracies and the complexities of multiple religions, cultures, and ethnicities, alongside unresolved ongoing issues, it is important to pay critical attention to information disorder and find appropriate ways to curb or mitigate its effects. The media, as the middleman in the distribution of information, need to build the capacities and capabilities to separate the whiff of misinformation and disinformation from the grains of truthful data. It has been observed that efforts aimed at fighting information pollution have not considered the resilience of media organisations against this disorder. Apparently, the efforts, resources, and technologies devoted to the conception, production, and spread of information pollution are much more sophisticated than the approaches available to suppress or even reduce its effects on society. Thus, this study interrogates the phenomenon of information pollution and the capabilities of selected media organisations in sub-Saharan Africa. In doing so, the following questions are probed: What actions do the media take to curb the menace of information pollution? Which of these actions are working, and how effective are they? And which of the actions are not working, and why? Adopting quantitative and qualitative approaches and anchored in Dynamic Capability Theory, the study digs up insights to further understand the complexities of information pollution and the media's capabilities and strategic resources for managing misinformation and disinformation in the region. The quantitative approach involves surveys, using questionnaires to obtain data from journalists on their understanding of misinformation and disinformation and their capabilities to gatekeep. Case analyses of selected media and content analysis of their strategic resources for managing misinformation and disinformation are adopted, while the qualitative approach involves in-depth interviews for a more robust analysis. The study is critical to the fight against information pollution for a number of reasons. One, it is a novel attempt to document the level of media capabilities to fight the phenomenon of information disorder. Two, the study will give the region a clear understanding of the capabilities of existing media organizations to combat misinformation and disinformation in the countries that make up the region. Recommendations emanating from the study could be used to initiate, intensify, or review existing approaches to combat the menace of information pollution in the region.
Keywords: disinformation, information pollution, misinformation, media capabilities, sub-Saharan Africa
Procedia PDF Downloads 162
91 Effective Emergency Response and Disaster Prevention: A Decision Support System for Urban Critical Infrastructure Management
Authors: M. Shahab Uddin, Pennung Warnitchai
Abstract:
Currently, more than half of the world's population lives in cities, and the number and size of cities are growing faster than ever. Cities rely on the effective functioning of complex and interdependent critical infrastructure networks to provide public services, enhance the quality of life, and protect the community from hazards and disasters. At the same time, the complex connectivity and interdependency among urban critical infrastructures bring management challenges and make the urban system prone to domino effects. Unplanned rapid growth, increased connectivity and interdependency among infrastructures, resource scarcity, and many other socio-political factors affect the typical state of an urban system and make it susceptible to numerous sorts of disruption. In addition to internal vulnerabilities, urban systems consistently face external threats from natural and man-made hazards. Cities are not just complex, interdependent systems; they are also hubs of the economy, politics, culture, education, etc. For survival and sustainability, complex urban systems need to manage their vulnerabilities and hazardous incidents more wisely and more interactively. Coordinated management of such systems offers huge potential for absorbing negative effects when some components function improperly. On the other hand, ineffective management during widespread disorder caused by hazard devastation may make the system more fragile and push it toward ultimate collapse. Accordingly, the current research hypothesizes that a hazardous event starts its journey as an emergency, and the system's internal vulnerability and response capacity determine its destination. Connectivity and interdependency among the urban critical infrastructures at this stage may transform vulnerabilities into a dynamic damaging force. An emergency may turn into a disaster in the absence of effective management; similarly, mismanagement or lack of management may lead the situation towards a catastrophe. Situation awareness and factual decision-making are the keys to winning this battle. The current research proposes a contextual decision support system for an urban critical infrastructure system that integrates three models: 1) a damage cascade model, which demonstrates damage propagation among the infrastructures through their connectivity and interdependency; 2) a restoration model, a dynamic restoration process for individual infrastructures based on facility damage state and overall disruptions in the surrounding support environment; and 3) an optimization model that ensures optimized utilization and distribution of available resources in and among the facilities. All three models are tightly connected and mutually interdependent, and together they can assess the situation and forecast the dynamic outputs of every input. Moreover, this integrated model will support disaster managers and decision-makers in checking all alternative decisions before implementation and in producing the maximum possible outputs from the available limited inputs. The proposed model will not only help reduce the extent of damage cascades but will also ensure priority restoration and optimize resource utilization through adaptive and collaborative management. Complex systems predictably fail, but in unpredictable ways. System understanding, situation awareness, and factual decisions may significantly help an urban system survive and sustain itself.
Keywords: disaster prevention, decision support system, emergency response, urban critical infrastructure system
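A minimal sketch of the damage-cascade idea in the first model: failure propagates along interdependency links and attenuates at each hop. The network, edge weights, and attenuation rule below are assumptions made for illustration, not the paper's model.

# Propagate damage through a toy infrastructure dependency graph.
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("power", "water", 0.6),     # water pumping depends on power
    ("power", "telecom", 0.5),
    ("telecom", "hospital", 0.3),
    ("water", "hospital", 0.4),
])

def cascade(graph, source, initial_damage, threshold=0.05):
    damage = {source: initial_damage}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for nbr in graph.successors(node):
            # damage passed downstream is attenuated by the edge weight
            passed = damage[node] * graph[node][nbr]["weight"]
            if passed > max(damage.get(nbr, 0.0), threshold):
                damage[nbr] = passed
                frontier.append(nbr)
    return damage

print(cascade(g, "power", 1.0))
# -> {'power': 1.0, 'water': 0.6, 'telecom': 0.5, 'hospital': 0.24}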
Procedia PDF Downloads 228
90 Cognitive Decline in People Living with HIV in India and Correlation with Neurometabolites Using 3T Magnetic Resonance Spectroscopy (MRS): A Cross-Sectional Study
Authors: Kartik Gupta, Virendra Kumar, Sanjeev Sinha, N. Jagannathan
Abstract:
Introduction: A significant number of patients with human immunodeficiency virus (HIV) infection show a neurocognitive decline (NCD) ranging from minor cognitive impairment to severe dementia. The possible causes of NCD in HIV-infected patients include brain injury by HIV before cART, neurotoxic viral proteins, and metabolic abnormalities. In the present study, we compared the level of NCD in asymptomatic HIV-infected patients with changes in brain metabolites measured using magnetic resonance spectroscopy (MRS). Methods: 43 HIV-positive patients (30 males and 13 females) attending the ART centre of the hospital, together with HIV-seronegative healthy subjects, were recruited for the study. All participants completed the MRI and MRS examination, detailed clinical assessments, and a battery of neuropsychological tests. All MR investigations were carried out on a 3.0 T MRI scanner (Ingenia/Achieva, Philips, Netherlands). The MRI examination protocol included the acquisition of T2-weighted images in the axial, coronal, and sagittal planes and of T1-weighted, FLAIR, and DWI images in the axial plane. Patients who showed any apparent lesion on MRI were excluded from the study. T2-weighted images in three orthogonal planes were used to localize the voxels in the left frontal lobe white matter (FWM) and left basal ganglia (BG) for single-voxel MRS. Single-voxel MRS spectra were acquired with a point-resolved spectroscopy (PRESS) localization pulse sequence at an echo time (TE) of 35 ms and a repetition time (TR) of 2000 ms, with 64 or 128 scans. Automated preprocessing and determination of absolute metabolite concentrations were performed using LCModel with the water-scaling method, and the Cramer-Rao lower bounds for all metabolites analyzed in the study were below 15%. Levels of total N-acetyl aspartate (tNAA), total choline (tCho), glutamate + glutamine (Glx), and total creatine (tCr) were measured. Cognition was tested using a battery of tests validated for the Indian population. The cognitive domains tested were memory, attention-information processing, abstraction-executive function, and simple and complex perceptual motor skills. Z-scores normalized to age, sex, and education standards were used to quantify dysfunction in these individual domains. NCD was defined as dysfunction (Z-score ≤ −2) in at least two domains. One-way ANOVA was used to compare brain metabolite differences between the two groups. Results: NCD was found in 23 (53%) patients. There was no significant difference in age, CD4 count, or viral load between the two groups. Maximum impairment was found in the domains of memory and simple motor skills, i.e., in 19/43 (44%) patients. The prevalence of deficits in attention-information processing, complex perceptual motor skills, and abstraction-executive function was 37%, 35%, and 33%, respectively. Subjects with NCD had a higher level of glutamate in the frontal region (8.03 ± 2.30 vs. 10.26 ± 5.24, p = 0.001). Conclusion: Among newly diagnosed, ART-naïve retroviral disease patients from India, cognitive decline was found in 53% of patients using tests validated for this population. Those with neurocognitive decline had a significantly higher level of glutamate in the left frontal region. There was no significant difference in age, CD4 count, or viral load at initiation of ART between the two groups.
Keywords: HIV, neurocognitive decline, neurometabolites, magnetic resonance spectroscopy
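A minimal sketch of the z-score classification rule described in the methods (decline flagged when at least two domains fall 2 SD or more below the norm); the normative means, SDs, and patient scores are illustrative assumptions, not study data.

# Normalize domain scores against norms and flag NCD per the two-domain rule.
norms = {  # domain: (normative mean, SD), hypothetical values
    "memory": (50, 10), "attention": (100, 15),
    "executive": (30, 5), "motor": (40, 8),
}
patient = {"memory": 28, "attention": 65, "executive": 29, "motor": 39}

z = {d: (patient[d] - m) / sd for d, (m, sd) in norms.items()}
impaired = [d for d, score in z.items() if score <= -2]
print(z, "-> NCD" if len(impaired) >= 2 else "-> no NCD")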
Procedia PDF Downloads 214
89 A Fresh Approach to Learn Evidence-Based Practice, a Prospective Interventional Study
Authors: Ebtehal Qulisy, Geoffrey Dougherty, Kholoud Hothan, Mylene Dandavino
Abstract:
Background: For more than 200 years, journal clubs (JCs) have been used to teach the fundamentals of critical appraisal and evidence-based practice (EBP). However, JC curricula face important challenges, including poor sustainability, insufficient time to prepare for and conduct the activities, and trainees' lack of skill and self-efficacy with critical appraisal. Andragogy principles and modern technology could help EBP be taught in more relevant, modern, and interactive ways. Method: We propose a fresh educational activity to teach EBP. The educational sessions are designed to encourage collaborative and experiential learning and do not require advance preparation by the participants. Each session lasts 60 minutes and is adaptable to in-person, virtual, or hybrid contexts. Sessions are structured around a worksheet and include three educational objectives: "1. Identify a Clinical Conundrum", "2. Compare and Contrast Current Guidelines", and "3. Choose a Recent Journal Article". Sessions begin with a facilitator's short presentation of a clinical scenario highlighting a "grey zone" in pediatrics. Trainees are placed in groups of two to four (depending on the number of participants) of varied training levels. The first task requires the identification of a clinical conundrum (a situation where there is no clear answer but only a reasonable solution) related to the scenario. For the second task, trainees must identify two or three clinical guidelines. The last task requires trainees to find a journal article published in the last year that reports an update on the scenario's topic. Participants are allowed to use their electronic devices throughout the session. Our university provides full-text access to major journals, which facilitated this exercise. Results: Participants were a convenience sample of trainees in the inpatient services at the Montréal Children's Hospital, McGill University. Sessions were conducted as part of an existing weekly academic activity and facilitated by pediatricians with experience in critical appraisal. There were 28 participants in 4 sessions held during Spring 2022. Time was allocated at the end of each session to collect participants' feedback via a self-administered online survey. There were 22 responses: 41% (n=9) from pediatric residents, 22.7% (n=5) from family medicine residents, 31.8% (n=7) from medical students, and 4.5% (n=1) from a nurse practitioner. Four respondents participated in more than one session. The "Satisfied" rates were 94.7% for session format, 100% for topic selection, 89.5% for time allocation, and 84.3% for worksheet structure. 60% of participants felt that including the sessions during the clinical ward rotation was "Feasible". As for self-efficacy, participants reported being "Confident" in the tasks as follows: 89.5% for the ability to identify a relevant conundrum, 94.8% for the compare-and-contrast task, and 84.2% for the identification of a published update. All participants "Agreed" that the sessions were effective for learning EBP, and all would recommend them for further teaching. Conclusion: We developed a modern approach to teaching EBP that was enjoyed by participants at all levels, who also felt it was a useful learning experience. Our approach addresses known JC challenges by being relevant to clinical care, fostering active engagement while requiring no preparation, using available technology, and being adaptable to hybrid contexts.
Keywords: medical education, journal clubs, post-graduate teaching, andragogy, experiential learning, evidence-based practice
Procedia PDF Downloads 116
88 Optimizing Productivity and Quality through the Establishment of a Learning Management System for an Agency-Based Graduate School
Authors: Maria Corazon Tapang-Lopez, Alyn Joy Dela Cruz Baltazar, Bobby Jones Villanueva Domdom
Abstract:
The requisites for an organization implementing a quality management system to sustain its compliance with the requirements and its commitment to continuous improvement are even higher. Offices and units are expected to show high and consistent compliance with the established processes and procedures. The Development Academy of the Philippines (DAP) has been operating under project management, for which it holds quality management certification. To further realize its mandate as a think tank and capacity builder of the government, DAP expanded its operations and started to grant graduate degrees through its Graduate School of Public and Development Management (GSPDM). As the academic arm of the Academy, GSPDM offers graduate degree programs in public management and in productivity and quality, aligned with the institutional thrusts. For a time, the documented procedures and processes of project management seemed to fit the Graduate School. However, there has been significant growth in the operations of the GSPDM in terms of the graduate programs offered, which has directly increased the number of students. There is an apparent necessity to realign the project management system into a more education-oriented system; otherwise, it will no longer be responsive to the developments taking place. The school strongly advocates and encourages its students to pursue internal and external improvement to cope with the challenges of providing quality service to their own clients and to the country. If innovation does not take root in the grounds of the GSPDM, how will it serve the purpose of "walking the talk"? This research was conducted to assess the diverse flows of the existing internal operations and processes of DAP's project management and GSPDM's school management, to serve as the basis for developing a system that harmonizes the two into one: the Learning Management System. The study documented the existing processes of the GSPDM, following the project management phases of conceptualization and development, negotiation and contracting, mobilization, implementation, and closure, in flow charts of the key activities. The primary sources of information were the different groups involved in the delivery of the graduate programs: the executive, the learning management team, and the administrative support offices. The Learning Management System (LMS) shall capture the unique and critical processes of the GSPDM as a degree-granting unit of the Academy. The LMS is the harmonized project management and school management system that shall serve as the standard system and procedure for all programs within the GSPDM. The unique processes cover the three important areas of school management: student, curriculum, and faculty. The required processes in these main areas, such as enrolment, course syllabus development, and faculty evaluation, were appropriately placed within the phases of the project management system. Further, the research identified critical reports and generated manageable documents and records to ensure accurate, consistent, and reliable information. The researchers carried out an in-depth review of the DAP-GSPDM's mandate, analyzed the various documents, and conducted a series of focus group discussions. A comprehensive review of the existing flow chart system and of various models of school management systems was made. The final output of the research is a work instructions manual that will be presented to the Academy's Quality Management Council and will eventually form an additional scope for ISO certification. The manual shall include documented forms, iterative flow charts, and a program Gantt chart, alongside the parallel development of automated systems.
Keywords: productivity, quality, learning management system, agency-based graduate school
Procedia PDF Downloads 321