Search results for: distant named entity recognition
571 Understanding the Common Antibiotic and Heavy Metal Resistant-Bacterial Load in the Textile Industrial Effluents
Authors: Afroza Parvin, Md. Mahmudul Hasan, Md. Rokunozzaman, Papon Debnath
Abstract:
The effluents of textile industries contain considerable amounts of heavy metals, posing potential microbial metal loads if discharged into the environment without treatment. Aim: In the present study, both lactose- and non-lactose-fermenting bacterial isolates were obtained from textile industrial effluents of Savar, a region of Bangladesh, to compare and understand the load of heavy metals in these microorganisms and to determine the effects of heavy-metal resistance properties on antibiotic resistance. Methods: Five different textile industrial canals of Savar were selected, and effluent samples were collected between June and August 2016. Total bacterial colonies (TBC) were counted from day 1 to day 5 for sample dilutions from 10⁻⁶ to 10⁻¹⁰. All isolates were isolated and selected using 4 differential media and were tested for the minimum inhibitory concentration (MIC) of heavy metals by the plate assay method and for antibiotic susceptibility by the modified Kirby-Bauer disc diffusion method. To detect the combined effect of heavy metals and antibiotics, a binary exposure experiment was performed, and for plasmid profiling, plasmid DNA of selected isolates was extracted by the alkaline lysis method. Results: In most cases, the colony-forming units (CFU) per plate for a 50 µl diluted sample were uncountable at the 10⁻⁶ dilution but countable at the 10⁻¹⁰ dilution, and counts did not vary much from canal to canal. A total of 50 Shigella-like, 50 Salmonella-like, and 100 E. coli (Escherichia coli)-like bacterial isolates were selected for this study. The MIC of nickel (Ni) was less than or equal to 0.6 mM for 100% of the Shigella- and Salmonella-like isolates, but only 3% of the E. coli-like isolates had the same MIC. The MIC of chromium (Cr) was less than or equal to 2.0 mM for 16% of Shigella-, 20% of Salmonella-, and 17% of E. coli-like isolates.
Around 60% of both Shigella- and Salmonella-like isolates, but only 20% of E. coli-like isolates, had a MIC of less than or equal to 1.2 mM for lead (Pb). The most prevalent resistance pattern for azithromycin (AZM) was found in Shigella- and Salmonella-like isolates (38% and 48%, respectively), whereas for E. coli-like isolates the most prevalent pattern (36%) was resistance to sulfamethoxazole-trimethoprim (SXT). In the binary exposure experiment, the antibiotic zone of inhibition mostly increased in the presence of heavy metals for all types of isolates. The largest plasmids found were 21 kb and 14 kb for lactose- and non-lactose-fermenting isolates, respectively. Conclusion: Microbial resistance to antibiotics and metal ions poses potential health hazards because these traits are generally associated with transmissible plasmids. Microorganisms resistant to antibiotics and tolerant to metals appear as a result of exposure to metal-contaminated environments.
Keywords: antibiotics, effluents, heavy metals, minimum inhibitory concentration, resistance
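The dilution-plate counting described in the abstract follows standard CFU arithmetic; a minimal sketch of the back-calculation (the colony count here is hypothetical, not the study's data):

```python
def cfu_per_ml(colonies: int, dilution_exponent: int, plated_volume_ml: float) -> float:
    """Back-calculate CFU/mL of the undiluted effluent from a plate count.

    colonies          -- colonies counted on the plate
    dilution_exponent -- e.g. 10 for a 10**-10 serial dilution
    plated_volume_ml  -- volume spread on the plate (50 ul = 0.05 mL)
    """
    return colonies * (10 ** dilution_exponent) / plated_volume_ml

# Hypothetical count: 42 colonies on a 10**-10 plate from a 50 ul aliquot
print(cfu_per_ml(42, 10, 0.05))  # 8.4e12 CFU/mL in the original sample
```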
Procedia PDF Downloads 315
570 Is Audit Quality Implied by Accruals Quality Associated with Audit Fees and Auditor Tenure? Evidence from China
Authors: Hassan Y. Kikhia, Jin P. Zhang, Khaldoon G. Albiatr
Abstract:
The Enron and Arthur Andersen scandal raised concerns internationally about auditor independence and audit quality. Furthermore, the debate about the relationship between audit fees, auditor tenure, and audit quality continues in spite of extensive empirical evidence examining audit failures and earnings management. The purpose of the current research is therefore to determine the effect of audit fees and auditor tenure, both separately and jointly, on audit quality. We use a sample of Chinese firms, an environment that we believe provides an opportunity to test whether the development of market and legal institutions affects the impact of audit fees and auditor tenure on audit quality. We employ the standard deviation of residuals from regressions relating current accruals to cash flows as a proxy for audit quality. The paper documents a statistically significant negative association between audit fees and audit quality. These findings are consistent with economic bonding, rather than auditor reputational concerns, being a determinant of auditor behavior. Further, the current paper shows a positive association between auditor tenure and audit quality in the earlier years of audit tenure. These results support the proposition that when the learning effect dominates the bonding effect in the earlier years of tenure, audit quality is likely to be higher. Taking audit fees and auditor tenure together, the results suggest a positive association between audit fees and audit quality in the earlier years of auditor tenure. The findings of our study have important implications for auditors, policymakers, multinational firms, and users of financial reports. As the rapid growth of China's economy gains global recognition, the Chinese stock market is capturing the attention of international investors.
To a lesser extent, our paper also differs from prior studies in the methodology and findings of its investigation of audit quality.
Keywords: audit quality, accruals quality, audit fees, auditor tenure
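The accruals-quality proxy described above (standard deviation of residuals from regressions relating current accruals to cash flows) can be sketched as follows; the exact regression specification is an assumption in the spirit of Dechow-Dichev-type models, and the panel data are synthetic:

```python
import numpy as np

def accruals_quality(accruals, cf_prev, cf_curr, cf_next):
    """Audit-quality proxy: standard deviation of residuals from regressing
    current accruals on past, current, and future operating cash flows
    (Dechow-Dichev-style specification assumed; larger residual volatility
    means poorer accruals quality, i.e., lower implied audit quality)."""
    X = np.column_stack([np.ones_like(cf_curr), cf_prev, cf_curr, cf_next])
    beta, *_ = np.linalg.lstsq(X, accruals, rcond=None)
    residuals = accruals - X @ beta
    return residuals.std(ddof=X.shape[1])  # degrees-of-freedom corrected

# Hypothetical firm-year panel (synthetic numbers, not the paper's data)
rng = np.random.default_rng(0)
cf_prev, cf_curr, cf_next = rng.normal(size=(3, 50))
accruals = 0.4 * cf_curr - 0.2 * cf_prev + rng.normal(scale=0.1, size=50)
print(accruals_quality(accruals, cf_prev, cf_curr, cf_next))
```

The residual standard deviation recovers roughly the injected noise scale (about 0.1 here), illustrating why a larger value signals noisier, lower-quality accruals.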
Procedia PDF Downloads 280
569 Greenhouse Controlled with Graphical Plotting in Matlab
Authors: Bruno R. A. Oliveira, Italo V. V. Braga, Jonas P. Reges, Luiz P. O. Santos, Sidney C. Duarte, Emilson R. R. Melo, Auzuir R. Alexandria
Abstract:
This project aims to build a controlled greenhouse: a structure in which a given range of temperature values (°C), produced by radiation from an incandescent lamp, can be maintained, characterizing a kind of on-off control. A differentiating feature is the plotting of temperature-versus-time graphs in MATLAB via serial communication, so the chamber can be connected to a computer and its parameters monitored. Control is implemented with a PIC16F877A microcontroller, which converts analog signals to digital, performs serial communication through the MAX232 IC, and drives signal transistors; the PIC firmware is written in Basic. A cooling system comprises two 12 V coolers mounted on the lateral structure, one for venting and the other for exhausting air. An LM35DZ sensor measures the internal temperature. Another mechanism used in the greenhouse construction comprises a reed switch and a magnet that detect the door position; a buzzer sounds when the door is open. LEDs also help identify the operating state of the chamber. To facilitate human-machine communication, an LCD display shows the real-time temperature and other information. The design operates without major problems, taking into account the limitations of the construction material and the structure's electrical current conduction, at approximately 65 to 70 °C. The project is efficient under these conditions, that is, when information is wanted from a given material tested at temperatures that are not especially high. The implementation of the greenhouse automation facilitates temperature control and provides a structure that supports a correct environment for diverse applications.
Keywords: greenhouse, microcontroller, temperature, control, MATLAB
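The on-off control the abstract describes can be sketched as follows; this is an illustrative Python rendering of the logic only (the actual firmware is written in Basic for the PIC16F877A), and the setpoint and hysteresis values are assumptions:

```python
def lamp_command(temperature_c: float, setpoint_c: float, hysteresis_c: float, lamp_on: bool) -> bool:
    """On-off (bang-bang) control with a dead band, as in the greenhouse:
    the incandescent lamp heats until the setpoint is reached, then stays off
    until the temperature drops below setpoint - hysteresis.
    (Sketch only; the real firmware runs in Basic on a PIC16F877A.)"""
    if temperature_c >= setpoint_c:
        return False              # hot enough: lamp off
    if temperature_c < setpoint_c - hysteresis_c:
        return True               # too cold: lamp on
    return lamp_on                # inside the dead band: hold previous state

# Hypothetical operating point near the reported 65-70 °C working range
assert lamp_command(70.0, 67.0, 2.0, True) is False
assert lamp_command(60.0, 67.0, 2.0, False) is True
assert lamp_command(66.0, 67.0, 2.0, True) is True  # held on inside dead band
```

The dead band prevents the lamp from chattering on and off around the setpoint, which matters for relay and lamp lifetime.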
Procedia PDF Downloads 402
568 Auditory and Visual Perceptual Category Learning in Adults with ADHD: Implications for Learning Systems and Domain-General Factors
Authors: Yafit Gabay
Abstract:
Attention deficit hyperactivity disorder (ADHD) has been associated with suboptimal functioning in both the striatum and the prefrontal cortex. Such abnormalities may impede the acquisition of perceptual categories, which are important for fundamental abilities such as object recognition and speech perception. Indeed, prior research has supported this possibility, demonstrating that children with ADHD have visual category learning performance similar to that of their neurotypical peers but use suboptimal learning strategies. However, much less is known about category learning processes in the auditory domain or among adults with ADHD, in whom prefrontal functions are more mature than in children. Here, we investigated auditory and visual perceptual category learning in adults with ADHD and neurotypical individuals. Specifically, we examined learning of rule-based categories – presumed to be optimally learned by a frontal cortex-mediated hypothesis-testing system – and information-integration categories – hypothesized to be optimally learned by a striatally mediated reinforcement learning system. Consistent with the striatal and prefrontal cortical impairments observed in ADHD, our results show that across sensory modalities, both rule-based and information-integration category learning is impaired in adults with ADHD. Computational modeling analyses revealed that individuals with ADHD were slower to shift to optimal strategies than neurotypicals, regardless of category type or modality. Taken together, these results suggest that both explicit, frontally mediated and implicit, striatally mediated category learning are impaired in ADHD. These impairments across multiple learning systems in young adults with ADHD extend across sensory modalities and likely arise from domain-general mechanisms.
Keywords: ADHD, category learning, modality, computational modeling
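Strategy analyses of this kind are commonly carried out with decision-bound classifiers; purely as an illustration (the abstract does not specify its computational models), a rule-based learner applies a criterion on a single stimulus dimension, while an information-integration learner combines both dimensions linearly:

```python
import numpy as np

# Illustrative decision-bound classifiers of the kind commonly used to model
# rule-based vs. information-integration strategies (an assumption here; the
# paper's actual models are not given in the abstract).

def rule_based(stimuli: np.ndarray, criterion: float = 0.0) -> np.ndarray:
    """Explicit, verbalizable rule: categorize on a single stimulus dimension."""
    return (stimuli[:, 0] > criterion).astype(int)

def information_integration(stimuli: np.ndarray, weights=(1.0, 1.0), bias: float = 0.0) -> np.ndarray:
    """Implicit strategy: linear integration of both stimulus dimensions."""
    return (stimuli @ np.asarray(weights) > bias).astype(int)

stimuli = np.array([[1.0, -2.0], [-1.0, 2.0]])
print(rule_based(stimuli))               # only dimension 1 matters
print(information_integration(stimuli))  # both dimensions combine
```

Fitting each model to a participant's responses and comparing fits is one standard way to identify which strategy, optimal or suboptimal, the participant used.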
Procedia PDF Downloads 47
567 Uniqueness of Fingerprint Biometrics to Human Dynasty: A Review
Authors: Siddharatha Sharma
Abstract:
With the advent of technology and machines, biometrics is taking an important place in society for secure living. Security issues are a major concern in today's world and continue to grow in intensity and complexity. Biometrics-based recognition, which involves precise measurement of the characteristics of living beings, is not a new method. Fingerprints have been used for many years by law enforcement and forensic agencies to identify and apprehend culprits. Biometrics is based on four basic principles: (i) uniqueness, (ii) accuracy, (iii) permanency, and (iv) peculiarity. In today's world fingerprints are the most popular and unique biometric, claiming a social benefit in government-sponsored programs; a remarkable example is UIDAI (Unique Identification Authority of India) in India. In fingerprint biometrics the matching accuracy is very high; it has been observed empirically that even identical twins do not have similar prints. With the passage of time there has been immense progress in sensing techniques, computational speed, operating environments, and storage capabilities, and the technology has become more convenient for users. Only a small fraction of the population may be unsuitable for automatic identification because of genetic factors, aging, or environmental or occupational reasons, for example workers whose cuts and bruises keep their fingerprints changing. Fingerprints are limited to human beings because of the presence of volar skin with corrugated ridges, which is unique to this species. Fingerprint biometrics has proved to be a high-level authentication system for identifying human beings, though it has limitations: for example, if the ridges of the fingers or palm are moist, authentication becomes difficult.
This paper focuses on the uniqueness of fingerprints to human beings in comparison to other living beings and reviews the advancement of emerging technologies and their limitations.
Keywords: fingerprinting, biometrics, human beings, authentication
Procedia PDF Downloads 325
566 Family Models in Contemporary Multicultural Society: Exploratory Study Applied to Immigrants of Second and Third Generations
Authors: Danièle Peto
Abstract:
A qualitative study based on twenty-eight semi-structured interviews of students in Social Work in Brussels (Belgium) showed specific results for the Arab and Muslim students: second- and third-generation immigrants build their identity on a mix of differentiation with and recognition of their parents' culture of origin. Building a bridge between Modernity and Tradition, they claim active citizenship; at the same time they show and live by values and religious beliefs which reinforce the link to their parents' origins. But they present those values and beliefs as their own rational choices among other choices, all available and rich for our multicultural society. The way they speak of themselves is highly modern. Yet they still have to build a third way to find a place for themselves in society: one allowing them to live their religion as a partially public matter (when Occidental society leaves no such place for religion) while ensuring, at the same time, the development of independent critical thought. On this basis, further semi-structured interviews are being conducted with social workers working with families from diverse ethnic backgrounds. They will verify the reality of those identity and cultural bricolages when those young adults of the second and third generations build their own families. In between the theoretical models of the traditional family and the modern family, shall we find a new model, hybrid and more or less stable, combining some aspects of the former and the latter? The exploratory research phase focuses on three aspects of building a family life in this context: the way those generations play, discursively or not, between their parents and the society in which they grew up; the importance of intercultural dialogue in this process of building; and testing the hypothesis that some families, in our society, show a special way of courting Modernity.
Keywords: family models, identity bricolages, intercultural, modernity and tradition
Procedia PDF Downloads 301
565 The Healing 'Touch' of Music: A Neuro-Acoustics Approach to Understand Its Therapeutic Effect
Authors: Jagmeet S. Kanwal, Julia F. Langley
Abstract:
Music can heal the body, but a mechanistic understanding of this phenomenon is lacking. This study explores the effects of music presentation on neurologic and physiologic responses leading to metabolic changes in the human body. The mind and body co-exist in a corporeal entity and within this framework, sickness ensues when the mind-body balance goes awry. It is further hypothesized that music has the capacity to directly reset this balance. Two lines of inquiry taken together can provide a mechanistic understanding of this phenomenon 1) Empirical evidence for a sound-sensitive pressure sensor system in the body, and 2) The notion of a “healing center” within the brain that is activated by specific patterns of sounds. From an acoustics perspective, music is spatially distributed as pressure waves ranging from a few cm to several meters in wavelength. These waves interact and propagate in three-dimensions in unique ways, depending on the wavelength. Furthermore, music creates dynamically changing wave-fronts. Frequencies between 200 Hz and 1 kHz generate wavelengths that range from 5'6" to 1 foot. These dimensions are in the range of the body size of most people making it plausible that these pressure waves can geometrically interact with the body surface and create distinct patterns of pressure stimulation across the skin surface. For humans, short wavelength, high frequency (> 200 Hz) sounds are best received via cochlear receptors. For low frequency (< 200 Hz), long wavelength sound vibrations, however, the whole body may act as an ideal receiver. A vast array of highly sensitive pressure receptors (Pacinian corpuscles) is present just beneath the skin surface, as well as in the tendons, bones, several organs in the abdomen, and the sexual organs. 
Per the available empirical evidence, these receptors contribute to music perception by allowing the whole body to function as a sound receiver, and knowledge of how they function is essential to fully understanding the therapeutic effect of music. Neuroscientific studies have established that music stimulates the limbic system that can trigger states of anxiety, arousal, fear, and other emotions. These emotional states of brain activity play a crucial role in filtering top-down feedback from thoughts and bottom-up sensory inputs to the autonomic system, which automatically regulates bodily functions. Music likely exerts its pleasurable and healing effects by enhancing functional and effective connectivity and feedback mechanisms between brain regions that mediate reward, autonomic, and cognitive processing. Stimulation of pressure receptors under the skin by low-frequency music-induced sensations can activate multiple centers in the brain, including the amygdala, the cingulate cortex, and nucleus accumbens. Melodies in music in the low (< 600 Hz) frequency range may augment auditory inputs after convergence of the pressure-sensitive inputs from the vagus nerve onto emotive processing regions within the limbic system. The integration of music-generated auditory and somato-visceral inputs may lead to a synergistic input to the brain that promotes healing. Thus, music can literally heal humans through “touch” as it energizes the brain’s autonomic system for restoring homeostasis.
Keywords: acoustics, brain, music healing, pressure receptors
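The wavelength figures quoted in the abstract follow from the basic relation λ = c/f; a minimal check (assuming c ≈ 343 m/s in air at room temperature):

```python
def wavelength_m(frequency_hz: float, speed_of_sound_m_s: float = 343.0) -> float:
    """Acoustic wavelength in air: lambda = c / f (c ~ 343 m/s at 20 °C assumed)."""
    return speed_of_sound_m_s / frequency_hz

# The abstract's range: 200 Hz to 1 kHz gives roughly body-sized wavelengths
print(wavelength_m(200))   # ~1.7 m (about 5'7")
print(wavelength_m(1000))  # ~0.34 m (about 1.1 ft)
```

This is why the 200 Hz to 1 kHz band is the one where whole-body geometric interaction with the pressure field is plausible.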
Procedia PDF Downloads 166
564 Lightweight and Seamless Distributed Scheme for the Smart Home
Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro
Abstract:
Security of the smart home in terms of behavior-activity pattern recognition is a dissimilar and unique issue compared to the security issues of other scenarios. Sensor devices (low capacity and high capacity) interact and negotiate with each other, detecting the daily behavior activity of individuals to execute common tasks. Once a device (e.g., surveillance camera, smartphone, or light detection sensor) is compromised, an adversary can get access to that device and can damage daily behavior activity by altering its data and commands; in this scenario, a group of common instruction processes may become involved and generate deadlock. Therefore, an effective, suitable security solution is required for smart home architecture. This paper proposes a seamless distributed scheme which fortifies low-computation wireless devices for secure communication. The proposed scheme is based on a lightweight session-key process that upholds a cryptographic link for each trajectory by recognizing individuals' behavior-activity patterns. Every device and service-provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Analysis of the experiments reveals that the proposed scheme strengthens the devices against device-seizure attacks by recognizing daily behavior activities, minimizes the memory utilization of LCS, and keeps the network free of deadlock. Additionally, the results of a comparison with other schemes indicate that the scheme is efficient in terms of computation and communication.
Keywords: authentication, key-session, security, wireless sensors
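A token-plus-session-key handshake of the general kind the abstract describes can be sketched as follows; the paper's actual protocol details are not given, so every name, message, and step here is an illustrative assumption built from standard primitives:

```python
import hashlib
import hmac
import os
import secrets

# Hypothetical sketch of a lightweight token / session-key handshake of the
# kind the abstract describes; all identifiers and steps are illustrative.

PSK = os.urandom(32)  # pre-shared key provisioned to a sensor at enrollment

def make_token(device_id: str, nonce: bytes, key: bytes) -> bytes:
    """Authentication token: HMAC over the device identity and a fresh nonce."""
    return hmac.new(key, device_id.encode() + nonce, hashlib.sha256).digest()

def derive_session_key(token: bytes, nonce: bytes) -> bytes:
    """Per-session key derived from the token; cheap enough for low-capacity sensors."""
    return hashlib.sha256(token + nonce).digest()

# Handshake: the sensor proves its identity, then both sides derive one session key
nonce = secrets.token_bytes(16)
token_sensor = make_token("LCS-07", nonce, PSK)   # computed by the sensor
token_hub = make_token("LCS-07", nonce, PSK)      # recomputed by the hub
assert hmac.compare_digest(token_sensor, token_hub)
session_key = derive_session_key(token_sensor, nonce)
```

The fresh nonce per session prevents replay of an old token by an adversary who has seized a recording of earlier traffic, which is the attack class the scheme targets.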
Procedia PDF Downloads 318
563 A Multivariate Statistical Approach for Water Quality Assessment of River Hindon, India
Authors: Nida Rizvi, Deeksha Katyal, Varun Joshi
Abstract:
River Hindon is an important river catering to the demand of the highly populated rural and industrial clusters of western Uttar Pradesh, India. The water quality of river Hindon is deteriorating at an alarming rate due to various industrial, municipal, and agricultural activities. The present study aimed at identifying the pollution sources and quantifying the degree to which these sources are responsible for the deteriorating water quality of the river. Various water quality parameters, like pH, temperature, electrical conductivity, total dissolved solids, total hardness, calcium, chloride, nitrate, sulphate, biological oxygen demand, chemical oxygen demand, and total alkalinity, were assessed. Water quality data obtained from eight study sites over one year were subjected to two multivariate techniques, namely principal component analysis and cluster analysis. Principal component analysis was applied with the aim of finding out spatial variability and identifying the sources responsible for the water quality of the river. Three varifactors were obtained after varimax rotation of the initial principal components. Cluster analysis was carried out to classify sampling stations of certain similarity, grouping the eight sites into two clusters. The study reveals that anthropogenic influence (municipal, industrial, wastewater, and agricultural runoff) was the major source of river water pollution. Thus, this study illustrates the utility of multivariate statistical techniques for the analysis and elucidation of multifaceted data sets, the recognition of pollution sources/factors, and the understanding of temporal/spatial variations in water quality for effective river water quality management.
Keywords: cluster analysis, multivariate statistical techniques, river Hindon, water quality
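The study's workflow (standardize the parameters, extract principal components, then cluster the sites) can be sketched on synthetic data as follows; the varimax rotation step is omitted for brevity, and all values here are made up, not the Hindon measurements:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Synthetic stand-in for the study's data: 8 sampling sites x 12 parameters
rng = np.random.default_rng(42)
sites = rng.normal(size=(8, 12))
sites[4:] += 3.0  # pretend four sites are strongly polluted

# Principal component analysis on standardized data
z = (sites - sites.mean(axis=0)) / sites.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
order = np.argsort(eigvals)[::-1]
scores = z @ eigvecs[:, order[:3]]  # three components (cf. the three varifactors)

# Hierarchical cluster analysis grouping sites of similar quality
clusters = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(clusters)  # two groups of sites, as in the study
```

Ward linkage on the component scores recovers the clean/polluted split, mirroring how the paper's cluster analysis grouped the eight stations into two clusters.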
Procedia PDF Downloads 464
562 Isolation and Characterisation of Novel Environmental Bacteriophages Which Target the Escherichia coli LamB Outer Membrane Protein
Authors: Ziyue Zeng
Abstract:
Bacteriophages are viruses which infect bacteria specifically. Over the past decades, phage λ has been extensively studied, especially its interaction with the Escherichia coli LamB (EcLamB) protein receptor. Nonetheless, despite the enormous numbers and near-ubiquity of environmental phages, aside from phage λ, there is a paucity of information on other phages which target EcLamB as a receptor. In this study, to answer the question of whether there are other EcLamB-targeting phages in the natural environment, a simple and convenient method was developed and used for isolating environmental phages which target a particular surface structure of a particular bacterium; in this case, the EcLamB outer membrane protein. From the enrichments with the engineered bacterial hosts, a collection of EcLamB-targeting phages (ΦZZ phages) were easily isolated. Intriguingly, unlike phage λ, an obligate EcLamB-dependent phage in the Siphoviridae family, the newly isolated ΦZZ phages alternatively recognised EcLamB or E. coli OmpC (EcOmpC) as a receptor when infecting E. coli. Furthermore, ΦZZ phages were suggested to represent new species in the Tequatrovirus genus in the Myoviridae family, based on phage morphology and genomic sequences. Most phages are thought to have a narrow host range due to their exquisite specificity in receptor recognition. With the ability to optionally recognise two receptors, ΦZZ phages were considered relatively promiscuous. Via the heterologous expression of EcLamB on the bacterial cell surface, the host range of ΦZZ phages was further extended to three different enterobacterial genera. Besides, an interesting selection of evolved phage mutants with a broader host range was isolated, and the key mutations involved in their evolution to adapt to new hosts were investigated by genomic analysis. 
Finally, and importantly, two ΦZZ phages were found to be putative generalised transducers, which could be exploited as tools for DNA manipulations.
Keywords: environmental microbiology, phage, microbe-host interactions, microbial ecology
Procedia PDF Downloads 100
561 (Re)Framing the Muslim Subject: Studying the Artistic Representation of Guantanamo and Abu Ghraib Detainees
Authors: Iqra Raza
Abstract:
This paper attempts to conceptualize the (de)humanization of the Muslim subject in Karen J. Greenberg and Janet Hamlin’s transmedia Sketching Guantanamo through a close study of the aesthetics and semiotics of the text. The Muslim experience, the paper shall argue, is mediated through a (de)humanization confined and incarcerated within the chains of artistic representation. Hamlin’s reliance on the distortions offered by stereotypes is reminiscent of the late Victorian epistemology on criminality, as evidenced most starkly in the sketch of Khalid Sheikh Mohammad. The position of the white artist thus becomes suspect in the enterprise of neo-Victorian ethnography. The visual stories of movement from within Guantanamo become potent; the paper shall argue, especially in juxtaposition with the images of stillness that came out from the detention centers, which portrayed the enactment of violence on individual bodies with a deliberate erasure of faces. So, while art becomes a way for reclaiming subjectivity or humanizing these identifiable bodies, the medium predicates itself on their objectification. The paper shall explore various questions about what it means for the (criminal?) subjects to be rendered into art rather than being photographed. Does art entail a necessary departure from the assumed objectivity of the photographic images? What makes art the preferred medium for (de)humanization of the violated Muslim bodies? What happens when art is produced without a recognition of the ‘precariousness’ of the life being portrayed? Rendering the detainees into art becomes a slippery task complicated by Hamlin’s privileged position outside the glass walls of the court. The paper shall adjourn analysis at the many dichotomies that exist in the text viz. 
between the White men and the brown, the Muslims and the Christians, Occident and the Orient problematized by Hamlin’s politics, that of a ‘neutral outsider’ which quickly turns on its head and becomes complicity in her deliberate erasure of the violence that shaped and still shapes Guantanamo.
Keywords: Abu Ghraib, Derrida, Guantanamo, graphic journalism, Muslimness, orient, spectrality
Procedia PDF Downloads 152
560 Identification of Promiscuous Epitopes for Cellular Immune Responses in the Major Antigenic Protein Rv3873 Encoded by Region of Difference 1 of Mycobacterium tuberculosis
Authors: Abu Salim Mustafa
Abstract:
Rv3873 is a relatively large size protein (371 amino acids in length) and its gene is located in the immunodominant genomic region of difference (RD)1 that is present in the genome of Mycobacterium tuberculosis but deleted from the genomes of all the vaccine strains of Bacillus Calmette Guerin (BCG) and most other mycobacteria. However, when tested for cellular immune responses using peripheral blood mononuclear cells from tuberculosis patients and BCG-vaccinated healthy subjects, this protein was found to be a major stimulator of cell mediated immune responses in both groups of subjects. In order to further identify the sequence of immunodominant epitopes and explore their Human Leukocyte Antigen (HLA)-restriction for epitope recognition, 24 peptides (25-mers overlapping with the neighboring peptides by 10 residues) covering the sequence of Rv3873 were synthesized chemically using fluorenylmethyloxycarbonyl chemistry and tested in cell mediated immune responses. The results of these experiments helped in the identification of an immunodominant peptide P9 that was recognized by people expressing varying HLA-DR types. Furthermore, it was also predicted to be a promiscuous binder with multiple epitopes for binding to HLA-DR, HLA-DP and HLA-DQ alleles of HLA-class II molecules that present antigens to T helper cells, and to HLA-class I molecules that present antigens to T cytotoxic cells. In addition, the evaluation of peptide P9 using an immunogenicity predictor server yielded a high score (0.94), which indicated a greater probability of this peptide to elicit a protective cellular immune response. In conclusion, P9, a peptide with multiple epitopes and ability to bind several HLA class I and class II molecules for presentation to cells of the cellular immune response, may be useful as a peptide-based vaccine against tuberculosis.
Keywords: mycobacterium tuberculosis, PPE68, peptides, vaccine
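The peptide scan described above (24 overlapping 25-mers across the 371-residue protein, each overlapping its neighbor by 10 residues) corresponds to a 15-residue stride; a small sketch with a hypothetical stand-in sequence:

```python
def overlapping_peptides(sequence: str, length: int = 25, overlap: int = 10):
    """Full-length synthetic peptides spanning a protein sequence, as in the
    Rv3873 mapping: 25-mers overlapping neighbors by 10 residues (15-aa stride)."""
    step = length - overlap
    return [sequence[i:i + length] for i in range(0, len(sequence) - length + 1, step)]

# Hypothetical repeating sequence standing in for the 371-residue Rv3873
seq = "".join("ACDEFGHIKLMNPQRSTVWY"[i % 20] for i in range(371))
peps = overlapping_peptides(seq)
print(len(peps))  # 24 peptides, matching the abstract
```

Note that 24 full-length 25-mers on a 15-residue stride cover residues 1-370, so in practice the final peptide is usually anchored at the C-terminus to cover the last residue as well.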
Procedia PDF Downloads 135
559 Cardiac Arrest after Cardiac Surgery
Authors: Ravshan A. Ibadov, Sardor Kh. Ibragimov
Abstract:
Objective. The aim of the study was to optimize the protocol of cardiopulmonary resuscitation (CPR) after cardiovascular surgical interventions. Methods. The experience of CPR conducted on patients after cardiovascular surgical interventions in the Department of Intensive Care and Resuscitation (DIR) of the Republican Specialized Scientific-Practical Medical Center of Surgery named after Academician V. Vakhidov is presented. The key to the new approach is the rapid elimination of reversible causes of cardiac arrest, followed by either defibrillation or electrical cardioversion (depending on the situation) before external heart compression, which may damage the sternotomy. Careful use of adrenaline is emphasized due to the potential recurrence of hypertension, and timely resternotomy (within 5 minutes) is performed to ensure optimal cerebral perfusion through direct massage. Out of 32 patients, cardiac arrest in the form of asystole was observed in 16 (50%), with hypoxemia as the cause, while the remaining 16 (50%) experienced ventricular fibrillation caused by arrhythmogenic reactions. The age of the patients ranged from 6 to 60 years. All patients were evaluated before the operation using the ASA and EuroSCORE scales, falling into the moderate-risk group (3-5 points). CPR was conducted to restore cardiac activity according to the American Heart Association and European Resuscitation Council guidelines (Ley SJ. Standards for Resuscitation After Cardiac Surgery. Critical Care Nurse. 2015;35(2):30-38). The duration of CPR ranged from 8 to 50 minutes. The APACHE II scale was used to assess the severity of patients' conditions after CPR, and the Glasgow Coma Scale was employed to evaluate patients' consciousness after the restoration of cardiac activity and sedation withdrawal. Results. In all patients, immediate chest compressions of the necessary depth (4-5 cm) at a frequency of 100-120 compressions per minute were initiated upon detection of cardiac arrest.
Regardless of the type of cardiac arrest, defibrillation with a manual defibrillator was performed 3-5 minutes later, and adrenaline was administered in doses ranging from 100 to 300 mcg. Persistent ventricular fibrillation was also treated with antiarrhythmic therapy (amiodarone, lidocaine). If necessary, infusion of inotropes and vasopressors was used, and for the prevention of brain edema and the restoration of adequate neurostatus within 1-3 days, sedation, a magnesium-lidocaine mixture, mechanical intranasal cooling of the brain stem, and neuroprotective drugs were employed. A coordinated effort by the resuscitation team and proper role allocation within the team were essential for effective cardiopulmonary resuscitation (CPR). All these measures contributed to the improvement of CPR outcomes. Conclusion. Successful CPR following cardiac surgical interventions involves interdisciplinary collaboration. The application of an optimized CPR standard leads to a reduction in mortality rates and favorable neurological outcomes.
Keywords: cardiac surgery, cardiac arrest, resuscitation, critically ill patients
Procedia PDF Downloads 53
558 Emotional Labour and Employee Performance Appraisal: The Missing Link in Some Hotels in South East Nigeria
Authors: Polycarp Igbojekwe
Abstract:
The main objective of this study was to determine whether emotional labour has become a criterion in performance appraisal, job descriptions, selection, and training schemes in the hotel industry in Nigeria. Our main assumption was that the majority of hotel organizations have not built emotional labour into their human resources management schemes. Data were gathered through structured questionnaires designed in Likert format and through interviews. The focus group was managers of the selected hotels. Analyses revealed that the majority of the hotels have not built emotional labour into their human resources schemes, particularly the 1-, 2-, and 3-star hotels. It was observed that service employees of 1-, 2-, and 3-star hotels have not been adequately trained to perform emotional labour, a critical factor in quality service delivery, and that managers of these hotels have not given serious thought to emotional labour as such a factor. The study revealed that the suitability of an individual's characteristics is not considered as a criterion for selection and performance appraisal of service employees; the implication is that person-job fit is not seriously considered. It was observed that there has been a disconnect between required emotional competency and its recognition, evaluation, and training. Based on the findings of this study, it is concluded that the selection, training, job description, and performance appraisal instruments in use in hotels in Nigeria are inadequate. Human resource implications of the findings in this study are presented.
It is recommended that hotel organizations re-design and plan the emotional content and context of their human resources practices to reflect the emotional demands of front-line jobs in the hotel industry and the crucial role emotional labour plays during service encounters.
Keywords: emotional labour, employee selection, job description, performance appraisal, person-job-fit, employee compensation
Procedia PDF Downloads 191
557 Influence of Distribution of Body Fat on Cholesterol Non-HDL and Its Effect on Kidney Filtration
Authors: Magdalena B. Kaziuk, Waldemar Kosiba
Abstract:
Background: In the 21st century we have to deal with an epidemic of obesity, which is an important risk factor for cardiovascular and kidney diseases. Lipoproteins are directly involved in the atherosclerotic process. Non-high-density lipoprotein (non-HDL) cholesterol came into use following widespread recognition of its superiority over LDL as a measure of vascular event risk. Non-HDL captures the residual risk that persists in patients who have achieved the recommended LDL level. Materials and Methods: The study covered 111 patients (52 females, 59 males, age 51.91±14 years), hospitalized in an internal medicine department. Body composition was assessed using the bioimpedance method and anthropometric measurements. Physical activity data were collected during the interview. The nutritional status and the obesity type were determined with the Waist to Height Ratio and the Waist to Hip Ratio. Kidney function was evaluated by calculating the estimated glomerular filtration rate (eGFR) using the MDRD formula. Non-HDL was calculated as the difference between the total and HDL cholesterol concentrations. Results: 10% of patients were underweight, 23.9% had normal body weight, 15.08% were overweight, and the remaining 51.02% were obese. People with the android shape had higher non-HDL cholesterol than those with the gynoid shape (p=0.003). The higher the non-HDL, the lower the eGFR of the studied subjects (p < 0.001). A significant correlation was found between high non-HDL and incorrect dietary habits in patients avoiding vegetables and fruits and having low physical activity (p < 0.005). Conclusions: The android type of figure raises the residual risk of heart disease associated with higher levels of non-HDL. Increasing physical activity in these patients reduces the level of non-HDL.
Non-HDL seems to be the best predictor among all cholesterol measures of cardiovascular events and worsening eGFR.
Keywords: obesity, non-HDL cholesterol, glomerular filtration rate, lifestyle
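The two derived quantities this abstract relies on, non-HDL cholesterol and the MDRD eGFR, can be sketched in a few lines. This is a hedged illustration: the abstract does not state which MDRD variant was applied, so the 4-variable IDMS-traceable form (constant 175) is assumed here, and the ethnicity factor is included only because it is part of that standard formula.

```python
def non_hdl(total_chol, hdl_chol):
    """Non-HDL cholesterol: total cholesterol minus HDL cholesterol (same units)."""
    return total_chol - hdl_chol

def egfr_mdrd(creatinine_mg_dl, age_years, female=False, black=False):
    """4-variable MDRD estimate of GFR in mL/min/1.73 m^2.

    Assumes the IDMS-traceable constant 175; the study does not specify
    which MDRD variant was used.
    """
    egfr = 175.0 * (creatinine_mg_dl ** -1.154) * (age_years ** -0.203)
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: total cholesterol 200 mg/dL, HDL 50 mg/dL -> non-HDL 150 mg/dL.
print(non_hdl(200.0, 50.0))
print(egfr_mdrd(1.0, 50))
```

Both inputs must share units (mg/dL here); the study reports correlations between these two quantities, not a formula linking them.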
Procedia PDF Downloads 373
556 Evaluating the Effect of ‘Terroir’ on Volatile Composition of Red Wines
Authors: María Luisa Gonzalez-SanJose, Mihaela Mihnea, Vicente Gomez-Miguel
Abstract:
The zoning methodology currently recommended by the OIV as the official methodology to carry out viticultural zoning studies and to define and delimit ‘terroirs’ has been applied in this study. This methodology has been successfully applied to the most significant and important Spanish oenological D.O. regions, such as Ribera del Duero, Rioja, Rueda and Toro, and it has also been applied around the world, in Portugal, in different countries of South America, and elsewhere. It is a complex methodology that uses edaphoclimatic data as well as data corresponding to vineyards and other soil uses. The methodology is useful to determine Homogeneous Soil Units (HSU) at different scales depending on the interest of each study, and has been applied from viticultural regions down to particular vineyards. It seems that this methodology is an appropriate method to delimit the medium correctly in order to enhance its uses and to obtain the best viticultural and oenological products. The present work is focused on the comparison of the volatile composition of wines made from grapes grown in different HSU that coexist in a particular viticultural region of Castile and León, near Burgos. Three different HSU were selected for this study. They represented around 50% of the total vineyard area of the studied region. Five different vineyards on each HSU under study were chosen. To reduce variability, other criteria were also controlled, such as grape variety, clone, rootstock, vineyard age, training system and cultural practices. This study was carried out during three consecutive years, so wines from three different vintages were made and analysed. Different red wines were made from grapes harvested in the different vineyards under study. Grapes were harvested at ‘technological maturity’, which is correlated with adequate levels of sugar, acidity and phenolic content (nowadays named phenolic maturity), good sanitary stage and adequate levels of aroma precursors.
Results of the volatile profiles of the wines produced from grapes of each HSU showed significant differences among them, pointing to a direct effect of the edaphoclimatic characteristics of each HSU on the composition of the grapes and then on the volatile composition of the wines. Variability induced by the HSU coexisted with the well-known inter-annual variability correlated mainly with the specific climatic conditions of each vintage; however, the former was stronger, so the wines of each HSU were clearly differentiated. A discriminant analysis identified the volatiles with discriminant capacity, which were 21 of the 74 volatiles analysed. The detected discriminant volatiles were chemically diverse, although most of them were esters, followed by higher alcohols and short-chain fatty acids. Only one lactone and two aldehydes were selected as discriminant variables, and no varietal aroma compounds were selected, which agrees with the fact that all the wines were made from the same grape variety.
Keywords: viticulture zoning, terroir, wine, volatile profile
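As a hedged sketch of the idea behind selecting discriminant volatiles, the following ranks a single variable by a univariate Fisher-style ratio (between-class over within-class variance) across HSU groups. The study itself used a full multivariate discriminant analysis; the numbers below are invented toy data, not measurements from the paper.

```python
from statistics import mean, pvariance

def fisher_ratio(groups):
    """Between-class over within-class variance for one variable,
    given per-class (per-HSU) lists of measurements.  A high ratio
    marks the variable as discriminant."""
    grand = mean(x for g in groups for x in g)
    between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    within = sum(len(g) * pvariance(g) for g in groups)
    return between / within if within > 0 else float("inf")

# Toy example: one volatile measured in wines from three hypothetical HSUs.
hsu_a = [1.0, 1.1, 0.9]
hsu_b = [2.0, 2.1, 1.9]
hsu_c = [3.0, 3.1, 2.9]
print(fisher_ratio([hsu_a, hsu_b, hsu_c]))  # well-separated groups -> large ratio
```

Ranking all 74 volatiles by such a score and keeping the top ones mirrors, in a univariate way, how a discriminant analysis singles out the 21 informative compounds.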
Procedia PDF Downloads 221
555 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties
Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier
Abstract:
The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA models that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions where such data are scarce, this is especially challenging. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, while earthquakes with magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected magnitude threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, could rupture during a single fault-to-fault rupture.
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur randomly in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model to analyse the impact on the seismic hazard and, through sensitivity studies, better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected, in an area of moderate to high seismicity (southeast France) where the fault is assumed to have a low strain rate.
Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA
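The core conversion behind fault-based PSHA, which SHERIFS generalizes to whole fault systems, is turning a slip-rate budget into earthquake rates by balancing the seismic moment. A minimal single-fault sketch follows, assuming illustrative values: the shear modulus, b-value and magnitude bounds are placeholders, not values from the paper, and the real multi-fault bookkeeping is far more involved.

```python
MU = 3.0e10  # shear modulus in Pa (typical crustal value; an assumption)

def annual_moment_rate(area_m2, slip_rate_m_per_yr, mu=MU):
    """Seismic moment budget of a fault per year (N*m/yr): mu * A * slip rate."""
    return mu * area_m2 * slip_rate_m_per_yr

def moment_from_magnitude(mw):
    """Hanks-Kanamori relation: scalar moment M0 in N*m."""
    return 10 ** (1.5 * mw + 9.05)

def gr_rates(moment_rate, m_min, m_max, b=1.0, dm=0.1):
    """Distribute the annual moment budget over a truncated Gutenberg-Richter
    magnitude-frequency distribution; returns {magnitude bin: events/yr}."""
    n = int(round((m_max - m_min) / dm)) + 1
    mags = [m_min + i * dm for i in range(n)]
    weights = [10 ** (-b * m) for m in mags]          # relative GR frequencies
    spent_per_unit = sum(w * moment_from_magnitude(m) for w, m in zip(weights, mags))
    scale = moment_rate / spent_per_unit              # balance the moment budget
    return {round(m, 1): scale * w for m, w in zip(mags, weights)}

# Hypothetical fault: 50 km x 15 km, slipping 1 mm/yr.
rates = gr_rates(annual_moment_rate(50e3 * 15e3, 1e-3), m_min=5.0, m_max=7.0)
print(rates[5.0], rates[7.0])  # small events far more frequent than large ones
```

SHERIFS performs essentially this balance, but spends each fault's budget across single-fault and fault-to-fault ruptures simultaneously rather than on one isolated source.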
Procedia PDF Downloads 65
554 Cybersecurity for Digital Twins in the Built Environment: Research Landscape, Industry Attitudes and Future Direction
Authors: Kaznah Alshammari, Thomas Beach, Yacine Rezgui
Abstract:
Technological advances in the construction sector are helping to make smart cities a reality by means of cyber-physical systems (CPS). CPS integrate information and the physical world through the use of information communication technologies (ICT). An increasingly common goal in the built environment is to integrate building information models (BIM) with the Internet of Things (IoT) and sensor technologies using CPS. Future advances could see the adoption of digital twins, creating new opportunities for CPS using monitoring, simulation, and optimisation technologies. However, researchers often fail to fully consider the security implications. To date, it is not widely possible to assimilate BIM data and cybersecurity concepts, and, therefore, security has thus far been overlooked. This paper reviews the empirical literature concerning IoT applications in the built environment and discusses real-world applications of the IoT intended to enhance construction practices, improve people’s lives and bolster cybersecurity. Specifically, this research addresses two research questions: (a) how suitable are the current IoT and CPS security stacks for addressing the cybersecurity threats facing digital twins in the context of smart buildings and districts? and (b) what are the current obstacles to tackling cybersecurity threats to built environment CPS? To answer these questions, this paper reviews the current state-of-the-art research concerning digital twins in the built environment, the IoT, BIM, urban cities, and cybersecurity. The findings of this study confirmed the importance of using digital twins with both the IoT and BIM. Also, eight reference zones across Europe have gained special recognition for their contributions to the advancement of IoT science.
Therefore, this paper evaluates the use of digital twins in CPS to arrive at recommendations for expanding BIM specifications to facilitate IoT compliance, bolster cybersecurity and integrate digital twin and city standards in the smart cities of the future.
Keywords: BIM, cybersecurity, digital twins, IoT, urban cities
Procedia PDF Downloads 169
553 Beta-Carotene Attenuates Cognitive and Hepatic Impairment in Thioacetamide-Induced Rat Model of Hepatic Encephalopathy via Mitigation of MAPK/NF-κB Signaling Pathway
Authors: Marawan Abd Elbaset Mohamed, Hanan A. Ogaly, Rehab F. Abdel-Rahman, Ahmed-Farid O.A., Marwa S. Khattab, Reham M. Abd-Elsalam
Abstract:
Liver fibrosis is a severe worldwide health concern due to various chronic liver disorders. Hepatic encephalopathy (HE) is one of its most common complications, affecting liver and brain cognitive function. Beta-carotene (B-Car) is an organic, strongly colored red-orange pigment abundant in fungi, plants, and fruits. The study attempted to assess B-Car's neuroprotective potential against thioacetamide (TAA)-induced neurotoxicity and cognitive decline in HE in rats. Hepatic encephalopathy was induced by TAA (100 mg/kg, i.p.) three times per week for two weeks. B-Car was given orally (10 or 20 mg/kg) daily for two weeks after TAA injections. Organ-to-body weight ratio, serum transaminase activities, the liver’s antioxidant parameters, ammonia, and liver histopathology were assessed. Also, the brain’s mitogen-activated protein kinase (MAPK), nuclear factor kappa B (NF-κB), antioxidant parameters, adenosine triphosphate (ATP), adenosine monophosphate (AMP), norepinephrine (NE), dopamine (DA), serotonin (5-HT), 5-hydroxyindoleacetic acid (5-HIAA), cAMP response element-binding protein (CREB) expression and B-cell lymphoma 2 (Bcl-2) expression were measured. The brain’s cognitive functions (spontaneous locomotor activity, rotarod performance test, object recognition test) were assessed. B-Car prevented alteration of the brain’s cognitive function in a dose-dependent manner. The histopathological outcomes supported this biochemical evidence. Based on these results, it could be established that B-Car could be assigned to treat the brain’s neurotoxic consequences of HE via downregulation of MAPK/NF-κB signaling pathways.
Keywords: beta-carotene, liver injury, MAPK, NF-κB, rat, thioacetamide
Procedia PDF Downloads 154
552 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can be easily attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, which include Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach to compare generalization power. It is important to note that we use a subset of LivDet and the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' accuracy together with parameter counts and mean average error rates, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance.
For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied in our final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
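To make concrete why different loss functions lead to different errors on the same evaluation, here is a hedged pure-Python sketch of two of the loss families compared in the paper, scoring one and the same prediction. The actual experiments used full CNN frameworks with these losses built in; this sketch only illustrates the scoring behaviour.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single example."""
    return -math.log(softmax(logits)[label])

def multiclass_hinge(logits, label, margin=1.0):
    """Crammer-Singer style multiclass hinge loss for a single example."""
    rival = max(z for i, z in enumerate(logits) if i != label)
    return max(0.0, margin + rival - logits[label])

# The same confident, correct prediction is penalized differently:
logits = [4.0, 0.5, 0.1]            # class 0 is the true label
print(cross_entropy(logits, 0))     # small but nonzero: still pushes the score up
print(multiclass_hinge(logits, 0))  # zero: the margin is already satisfied
```

Cross-entropy keeps producing gradient pressure on already-correct examples, while hinge loss goes silent once the margin is met, which is one mechanism behind the divergent error profiles the paper observes across models.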
Procedia PDF Downloads 136
551 An Integrated Research of Airline Sponsorship
Authors: Stephen W. Wang
Abstract:
This research aims to explore the multi-faceted structure of airline passengers’ perception of airline sponsorship and its impact on airline passengers, and even consumers in general, in terms of airline brand preference and brand equity. The research is divided into two parts. The first part focuses on exploring the connotation and sub-dimensions of “air passengers’ perception of airline sponsorship”; the second part integrates “air passengers’ perception of the multi-factor aspects of corporate sponsorship”, brand transfer theory and brand theory, and explores the influence of airlines’ commitment to corporate sponsorship activities on the brand equity and brand preferences of airline passengers, and on passengers’ subsequent behavioral intentions. In addition, in order to clarify the differences between different types of corporate sponsorship activities and events in terms of “air passengers’ perception of airline corporate sponsorship activities”, brand transfer, brand preference, brand equity and behavioral intentions, this research also focuses on the moderating effects of corporate sponsorship events. With the application of a multi-group structural equation model, it is hoped that the effectiveness of airline companies’ sponsorship activities will be improved. In terms of theoretical and practical implications, the aviation industry can use the results of this research to understand which corporate sponsorship perceptions have a greater impact on consumers, which has important practical significance. The second part of the research project, from the consumer's point of view, examines whether airline corporate sponsorship activities influence behavioral intentions through brand transfer and brand recognition.
Through the analysis of the mediating effects of brand transfer, brand preference and brand equity, the results of this research can provide a more complete and powerful explanation of why airlines’ commitment to corporate sponsorship activities can affect airline passengers’ purchase intentions, which will help fill the gap in theoretical and practical research on “airline corporate sponsorship”, and has its theoretical significance.
Keywords: airline, sponsorship, brand image transfer, brand preference
Procedia PDF Downloads 30
550 Traditional Mechanisms of Conflict Resolution in Africa: A Pathway to Sustainable Peace in Nigeria
Authors: Ejovi Eghwubare Augustine
Abstract:
This study delved into the traditional mechanisms of conflict resolution in Africa as a pathway to sustainable peace in Nigeria. It deployed quantitative and qualitative methods of data collection and content analysis. The work adopted the peace process theory propounded by John Darby and Roger Mac Ginty. It ascertained that disputes or disagreements are an inevitable part of human existence, flowing directly from communication, interaction, and relationships, and can occur at individual and national levels, and even at international levels in view of the current trend of globalization. The Alternative Dispute Resolution (ADR) mechanism is a basket of procedures outside the traditional process of litigation or strict determination of legal rights. It may also be described as a range of procedures that generally involve the intercession and assistance of a neutral and impartial third party. The traditional mechanisms of conflict resolution in Africa are alien to the Western world; this paper is therefore of utmost importance to the Western world and also enriches its pool of literature. Nigeria is a country that is dominated by various ethnic groups anchored on diverse cultures, customs, and traditions. It is, therefore, not surprising to see conflicts arise, and despite the various attempts at resolving these conflicts through litigation, they have remained unabated. The paper investigated the lessons learned from traditional mechanisms of conflict resolution; it also interrogated their impact and the way forward. In light of the lessons learned and the impact of the traditional mechanisms of conflict resolution, suggestions on how to attain a sustainable, peaceful society were proffered.
In conclusion, the study crystallized reforms on alternative dispute resolution introduced through traditional mechanisms, including, among others, that constitutional recognition should be given to traditional institutions of conflict resolution to enable quick dispensation of matters.
Keywords: traditional, conflict, peace, resolution
Procedia PDF Downloads 72
549 Free Energy Computation of A G-Quadruplex-Ligand Structure: A Classical Molecular Dynamics and Metadynamics Simulation Study
Authors: Juan Antonio Mondragon Sanchez, Ruben Santamaria
Abstract:
The DNA G-quadruplex is a four-stranded DNA structure formed by stacked planes of four base-paired guanines (G-quartets). Guanine-rich DNA sequences appear at many sites in genomic DNA and can potentially form G-quadruplexes, such as those occurring at the 3'-terminus of the human telomeric DNA. The formation and stabilization of a G-quadruplex by small ligands at the telomeric region can inhibit telomerase activity. In turn, the ligands can be used to down-regulate oncogene expression, making the G-quadruplex an attractive target for anticancer therapy. Many G-quadruplex ligands have been proposed with a planar core to facilitate pi-pi stacking and electrostatic interactions with the G-quartets. However, many drug candidates are unable to discriminate a G-quadruplex from a double-helix DNA structure. In this context, it is important to investigate the site topology of the interaction of a G-quadruplex with a ligand. In this work, we determine the free energy surface of a G-quadruplex-ligand complex to study the binding modes of the G-quadruplex (TG4T) with the daunomycin (DM) drug. The complex TG4T-DM is studied using classical molecular dynamics in combination with metadynamics simulations. The metadynamics simulations permit an enhanced sampling of the conformational space at a modest computational cost and yield free energy surfaces in terms of the collective variables (CVs). The free energy surfaces of TG4T-DM exhibit additional local minima, indicating the presence of binding modes of daunomycin that are not observed in short MD simulations without the metadynamics approach. The results are compared with similar calculations on a different structure (the mutated mu-G4T-DM, where the 5' thymines of TG4T-DM have been deleted).
The results should help in the design of new G-quadruplex drugs and in understanding the differences between the recognition topology sites of duplex and quadruplex DNA structures in their interaction with ligands.
Keywords: g-quadruplex, cancer, molecular dynamics, metadynamics
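A hedged one-dimensional toy of the metadynamics scheme used here: Gaussian bias hills are deposited along a collective variable (CV) during the dynamics, progressively filling visited basins so that the accumulated bias estimates the negative of the free energy. The double-well potential, hill parameters, and Langevin settings below are illustrative, not the TG4T-DM simulation setup.

```python
import math
import random

def free_energy(s):
    """Toy double-well free energy along one collective variable."""
    return (s ** 2 - 1.0) ** 2

def bias(s, centers, height=0.1, width=0.2):
    """Metadynamics bias: a sum of Gaussian hills centered at past CV values."""
    return sum(height * math.exp(-(s - c) ** 2 / (2.0 * width ** 2)) for c in centers)

def run_metadynamics(steps=5000, dt=0.01, deposit_every=100, temp=0.1):
    """Overdamped Langevin dynamics on free_energy + bias, depositing one hill
    every `deposit_every` steps; returns the list of deposited hill centers."""
    random.seed(0)
    s = -1.0              # start in the left well
    centers = []

    def biased(x):
        return free_energy(x) + bias(x, centers)

    for step in range(steps):
        h = 1e-4
        grad = (biased(s + h) - biased(s - h)) / (2.0 * h)  # numerical gradient
        s += -grad * dt + math.sqrt(2.0 * temp * dt) * random.gauss(0.0, 1.0)
        if step % deposit_every == 0:
            centers.append(s)
    return centers

centers = run_metadynamics()
# Once a basin is filled with hills, -bias(s, centers) approximates the free
# energy there up to a constant; this is the enhanced-sampling mechanism.
print(len(centers), min(centers), max(centers))
```

In the actual study the CVs are multi-dimensional ligand-quadruplex coordinates and the dynamics is all-atom MD, but the fill-and-read-back logic is the same.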
Procedia PDF Downloads 459
548 Psychedelic Assisted-Treatment for Patients with Opioid Use Disorder
Authors: Daniele Zullino, Gabriel Thorens, Léonice Furtado, Federico Seragnoli, Radu Iuga, Louise Penzenstadler
Abstract:
Context: Since the start of the 21st century, there has been a resurgence of interest in psychedelics, marked by a renewed focus on scientific investigations into their therapeutic potential. While psychedelic therapy has gained recognition for effectively treating depression and anxiety disorders, notable progress has been made in the clinical development of substances like psilocybin. Moreover, mounting evidence suggests promising applications of Lysergic acid diethylamide (LSD) and psilocybin in the field of addiction medicine. In Switzerland, compassionate treatment with LSD and psilocybin has been permitted since 2014 through exceptional licenses granted by the Federal Office of Public Health. This treatment approach is also available within the Geneva treatment program, extending its accessibility to patients undergoing opioid-assisted treatment involving substances like morphine and diacetylmorphine. The aim of this study is to assess the feasibility of psychedelic-assisted therapy in patients with opioid use disorder who are undergoing opioid-assisted treatment. This study addresses the question of whether psychedelic-assisted therapy can be successfully implemented in patients with opioid use disorder. It also explores the effects of psychedelic therapy on the patient's experiences and outcomes. Methodology: This is an open case series on six patients who have undergone at least one session with either LSD (100-200 micrograms) or psilocybin (20-40 mg). The patients were assessed using the Five Dimensional Altered States of Consciousness (5D-ASC)-Scale. The data were analyzed descriptively to identify patterns and trends in the patients' experiences. Results: The patients experienced substantial positive psychedelic effects during the psychedelic sessions without significant adverse effects. The patients reported positive experiences and improvements in their condition. 
Conclusion: The findings of this study support the feasibility and potential efficacy of psychedelic-assisted therapy in patients undergoing opioid-assisted treatment.
Keywords: psychedelics, psychedelic-assisted treatment, opioid use disorder, addiction, LSD, psilocybin
Procedia PDF Downloads 55
547 The ‘Plain Style' in the Theory and Practice of Project Design: Contributions to the Shaping of an Urban Image on the Waterfront Prior to the 1755 Earthquake
Authors: Armenio Lopes, Carlos Ferreira
Abstract:
In the specific context of the Iberian Union between 1580 and 1640, characteristics emerged in Portuguese architecture that stood out from the main architectural production of the period. Recognised and identified aspects that had begun making their appearance decades before (1521) became significantly more marked during the Habsburg-Spanish occupation. Distinct even from the imperialist language of Spain, this trend would endure even after the restoration of independence (1706), continuing through to the start of the age of absolutism. Or perhaps not. This trend, recognised as the Plain Style (Kubler) and associated with a certain scarcity of resources, involved a certain formal and decorative simplification, as well as a particular set of conventions that would subsequently mark the landscape. This expression could also be seen as a means of asserting a certain spirit of independence as the Iberian Union breathed its last. The image of a simple, bare-bones architecture with purer design lines is associated by various authors, most notably Kubler, with the narratives of modernism, to whose principles it is similar, in a context specific to the period. There is a contrast with some of the exuberance of the Baroque or its expression in the Manueline period, much as modernism responded to nineteenth-century eclecticism. This assertion and practice of simple architecture, drafted from the interpretation of the treatises and highlighting a certain classical inspiration, was to become a benchmark in the theory of architecture, spanning the Baroque and Mannerism, until achieving contemporary recognition for a certain originality and modernity.
At a time when the Baroque and its scenography became very widespread, it is important also to recognise the role played by Plain Style architecture in the construction of a rather complex and contradictory waterfront landscape, featuring promises of exuberance alongside more discreet practices.
Keywords: Carlos Mardel, Lisbon's waterfront, plain style, urban image on the waterfront
Procedia PDF Downloads 138
546 Adopt and Apply Research-Supported Standards and Practices to Ensure Quality for Online Education and Digital Learning at Course, Program and Institutional Levels
Authors: Yaping Gao
Abstract:
With the increasing globalization of education and the continued momentum and wider adoption of online and digital learning all over the world post-pandemic, how can the best practices and extensive experience gained by the higher education community over the past few decades be adopted and adapted to benefit international communities, which can be vastly different culturally and pedagogically? How can schools and institutions adopt, adapt and apply these proven practices to develop strategic plans for digital transformation at the institutional level, and to improve or create quality online or digital learning environments at the course and program levels to help all students succeed? The presenter will introduce the primary components of the US-based quality assurance process, including: 1) five sets of research-supported standards to guide the design, development and review of online and hybrid courses; 2) professional development offerings and pathways for administrators, faculty and instructional support staff; 3) a peer-review process for course/program reviews resulting in constructive recommendations for continuous improvement, certification of quality and international recognition; and 4) implementation of the quality assurance process on a continuum toward program excellence, achievement of institutional goals, and facilitation of the accreditation process and success. Regardless of language, culture, pedagogical practices, or technological infrastructure, the core elements of quality teaching and learning remain the same across all delivery formats. What is unique is how to ensure the quality of teaching and learning in online education and digital learning. No one knows all the answers, but no one needs to reinvent the wheel either.
Together, the international education community can support and learn from each other to achieve institutional goals and ensure all students succeed in digital learning environments.
Keywords: online education, digital learning, quality assurance, standards and best practices
Procedia PDF Downloads 25
545 Epididymis in the Agouti (Dasyprocta azarae): Light Microscope Study
Authors: Bruno C. Schimming, Leandro L. Martins, Patrícia F. F. Pinheiro, Raquel F. Domeniconi, Fabrício S. Oliveira
Abstract:
The agouti is a wild rodent that can be used as an alternative source of animal protein, and this species has been raised in captivity in Brazil with the aim of providing meat. Thus, knowledge of their reproductive biology and of the morphology of the reproductive organs is important. The objective of this study was to describe the morphology of the epididymis in Azara’s agouti by light microscopy. Samples of epididymis were obtained from five adult Azara’s agoutis (Dasyprocta azarae) during castration surgery performed at the Municipal Zoo of Catanduva, Brazil. Fragments of the epididymal regions (initial segment, caput, corpus and cauda) were collected. The biological samples were immediately fixed in paraformaldehyde for 24 hours, followed by histologic procedures comprising embedding in ParaplastTM (Sigma, St. Louis, MO, USA), 5 µm sections, and staining with HE and Masson’s trichrome. The epididymis was a highly convoluted tubule that links the testis to the vas deferens. The epithelium lining was pseudostratified columnar, surrounded by a periductal stroma. The epithelium contains several cell types: principal, basal, apical, clear, and halo cells. Principal cells were the most abundant cell type. Migratory cells, named halo cells, were also observed. The caput epididymis was divided into two different regions: initial segment and caput. The initial segment has a very wide lumen containing exfoliated material, and a tall epithelium with conspicuous microvilli. The other region of the caput epididymis showed a lower epithelium than the initial segment, large amounts of spermatozoa in the lumen, and cytoplasmic vacuolization. This region presented many narrow cells. Many spermatozoa appeared in the lumen of the corpus epididymis. The cauda region had a lower epithelium than the other epididymal regions in the agouti. The cauda epithelium presented plicae protruding into the lumen.
Large amounts of spermatozoa are also present in the lumen. Small microvilli, uniformly arranged so as to form a kind of “brush border”, are observed on the apical surface of the cauda epithelium. The pattern of the epithelium lining the duct of the agouti epididymis does not differ greatly from that reported for other mammals, both domestic and wild. These findings can contribute to future investigations, especially those related to the rational exploitation of these animals. All experimental procedures were approved by the institutional ethics committee (CEUA 796/2015). This study was supported by FAPESP (Grants 2015/23822-1).
Keywords: wildlife, testis excurrent ducts, epididymis, morphology
Procedia PDF Downloads 236
544 A Text in Movement in the Totonac Flyers’ Dance: A Performance-Linguistic Theory
Authors: Luisa Villani
Abstract:
The proposal addresses the connection between mind, body, society, and environment in the Flyers’ dance, a well-known rotatory dance in Mexico, in creating meanings and making the apprehension of the world possible. The interaction among brain, mind, body, and environment, and the intersubjective relations among them, create and recreate the world as a social interaction. The purpose of this methodology, based on embodied cognition theory and named “A Performance-Embodied Theory”, is to find the principles and patterns that organize the culture and the rules of the apprehension of the environment by Totonac people while the dance is performed. The analysis started by questioning how anthropologists can interpret how Totonacs transform their unconscious knowledge into conscious knowledge, and how the scheme formation of their imagination and collective imagery is understood in the context of public-facing rituals such as the Flyers’ dance. The problem is that researchers usually interpret elements separately rather than as a complex ritual dancing whole, and addressing the dance as such a whole is the original contribution of this study. This theory, which accepts that people are body-mind agents, interprets the dance as a whole in which the different elements are joined into an integral interpretation. To understand incorporation, data were collected over prolonged periods of fieldwork, with participant observation and analysis of linguistic and extralinguistic data. Laban’s notation was first used for the description and analysis of gestures and movements in space, but the study later transformed and went beyond this method, which remains linear and compositional. Performance in a ritual is the actualization of one potential complex of meanings, or cognitive domains, among many others in a culture: one potential dimension becomes probable and then real through the activation of specific meanings in a context. 
Only what language permits can be thought, and the lexicon used depends on the particular culture. Only some parts of this knowledge can be activated at once, and these parts of knowledge are connected; only in this way can the world be understood. Just as languages geometrize the physical world through the body, so does ritual. In conclusion, the ritual behaves as an embodied grammar, a text in movement, which, depending on the ritual phases and the words and sentences pronounced in the ritual, activates bits of the encyclopedic knowledge that people have about the world. Gestures are not given by the performer but emerge from the intentional perception in which gestures are “understood” by the audio-spectator in an inter-corporeal way. The impact of this study lies in the possibility not only of disseminating knowledge effectively but also of generating a balance between different parts of the world where knowledge is shared, rather than being received from academic institutions alone. This knowledge can be exchanged, so that indigenous communities and academies can together take part in activating and sharing it with the world.
Keywords: dance, flyers, performance, embodied, cognition
Procedia PDF Downloads 58
543 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning
Authors: Shayla He
Abstract:
Background and Purpose: According to Chamie (2017), it is estimated that no fewer than 150 million people, or about 2 percent of the world’s population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial for helping states and cities prepare affordable housing plans and other community service plans ahead of time. This study used data from New York City, examined the key factors associated with homelessness, and developed systematic models to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of the homeless population. Each model was trained and tuned on the New York City dataset, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated on data from Seattle that was not part of the training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the coefficient of determination (R²) from -11.73 to 0.88 and reducing MSE by 99%. 
HP-RNN was then validated on the data from Seattle, WA, which showed a peak percentage error of 14.5% between the actual and predicted counts. Finally, the model was used to predict the trend during the COVID-19 pandemic; it shows good agreement between the actual and predicted homeless populations, with a peak percentage error below 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homeless-related data. The model shows a close correlation between the actual and predicted homeless populations. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services. Second, the prediction can serve as a reference for policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
Keywords: homeless, prediction, model, RNN
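The evaluation step described in Phases 3 and 4 can be illustrated with a minimal sketch: fit a linear trend (the baseline model class mentioned above) to yearly sheltered-homeless counts and score it with MSE and R², the two metrics the study reports. This is a hedged illustration only; the helper functions and most data points are invented for the example (only the 1983 and 2020 New York City figures come from the abstract), and the sketch is not the paper's HP-RNN.

```python
# Illustrative Phase 3/4-style evaluation: ordinary least squares trend fit,
# scored with MSE and R^2. Data are synthetic except the 1983/2020 NYC counts.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def mse(actual, predicted):
    """Mean Squared Error between observed and predicted counts."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def r2(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - my) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

years = [1983, 1990, 2000, 2010, 2020]        # illustrative time points
counts = [12830, 23000, 31000, 38000, 62679]  # interior values are invented

slope, intercept = linear_fit(years, counts)
preds = [slope * y + intercept for y in years]
print(f"MSE = {mse(counts, preds):.0f}, R^2 = {r2(counts, preds):.3f}")
```

A stronger model, such as the RNN the study favors, would be compared against this baseline on held-out data (the Seattle set in Phase 4) using the same two metrics.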
Procedia PDF Downloads 121
542 New Bio-Strategies for Ochratoxin A Detoxification Using Lactic Acid Bacteria
Authors: José Maria, Vânia Laranjo, Luís Abrunhosa, António Inês
Abstract:
The occurrence of mycotoxigenic moulds such as Aspergillus, Penicillium, and Fusarium in food and feed has an important impact on public health through the appearance of acute and chronic mycotoxicoses in humans and animals, which are more severe in developing countries owing to lack of food security, poverty, and malnutrition. This mould contamination is also a major economic problem because of the loss of crop production. A great variety of filamentous fungi are able to produce highly toxic secondary metabolites known as mycotoxins. Most mycotoxins are carcinogenic, mutagenic, neurotoxic, and immunosuppressive, ochratoxin A (OTA) being one of the most important. OTA is toxic to animals and humans, mainly due to its nephrotoxic properties. Several approaches have been developed for decontamination of mycotoxins in foods, such as prevention of contamination, biodegradation of mycotoxin-containing food and feed with microorganisms or enzymes, and inhibition of absorption of the mycotoxin content of consumed food in the digestive tract. A group of Gram-positive bacteria, the lactic acid bacteria (LAB), can release molecules that influence mould growth, improving the shelf life of many fermented products and reducing health risks due to exposure to mycotoxins. Some LAB are capable of mycotoxin detoxification. Recently, our group was the first to describe the ability of LAB strains to biodegrade OTA, specifically Pediococcus parvulus strains isolated from Douro wines. The pathway of this biodegradation had previously been identified in other microorganisms: OTA can be degraded through hydrolysis of the amide bond that links the L-β-phenylalanine molecule to ochratoxin alpha (OTα), a non-toxic compound. Peptidases from different origins are known to mediate this hydrolysis, such as carboxypeptidase A (an enzyme from the bovine pancreas), a commercial lipase, and several commercial proteases. 
Therefore, we wanted a better understanding of this OTA degradation process when LAB are involved, and to identify which molecules are present in the process. To achieve this, we used several bioinformatics tools (BLAST, CLUSTALX2, CLC Sequence Viewer 7, Finch TV). We also designed specific primers and performed gene-specific PCR. The template DNA came from the LAB strain samples of our previous work and from other LAB strains isolated from elderberry fruit, silage, milk, and sausages. Through these bioinformatics tools it was possible to identify several proteins of the carboxypeptidase family that participate in the process of OTA degradation, such as serine-type D-Ala-D-Ala carboxypeptidase and membrane carboxypeptidase. In conclusion, this work identified carboxypeptidase proteins as among the molecules present in the OTA degradation process when LAB are involved.
Keywords: carboxypeptidase, lactic acid bacteria, mycotoxins, ochratoxin A
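The BLAST-style comparison step described above, matching candidate LAB proteins against known carboxypeptidases, can be sketched with a toy similarity measure. This is a hedged stand-in only: real BLAST uses scored, gapped local alignments, and the sequences and strain names below are invented placeholders, not actual D-Ala-D-Ala carboxypeptidase sequences.

```python
# Toy stand-in for a BLAST-style screen: rank hypothetical LAB protein
# fragments by ungapped percent identity to a query carboxypeptidase
# fragment. All sequences and strain names are illustrative placeholders.

def percent_identity(query, subject):
    """Ungapped identity over the aligned prefix of the two sequences."""
    n = min(len(query), len(subject))
    matches = sum(1 for a, b in zip(query, subject) if a == b)
    return 100.0 * matches / n

query = "MKTAYIAKQRQISFVKSH"  # hypothetical carboxypeptidase fragment
candidates = {
    "LAB_strain_A": "MKTAYIAKQRQISFVKSH",
    "LAB_strain_B": "MKTAYLAKQRHISFVKAH",
    "LAB_strain_C": "MSTQNLAKPRNIGFVRTH",
}

# Report candidates from most to least similar to the query.
for name, seq in sorted(candidates.items(),
                        key=lambda kv: -percent_identity(query, kv[1])):
    print(f"{name}: {percent_identity(query, seq):.1f}% identity")
```

In practice, high-identity hits from such a screen would then guide primer design for the gene-specific PCR step the abstract describes.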
Procedia PDF Downloads 462