Search results for: game outcome prediction
254 Explaining Irregularity in Music by Entropy and Information Content
Authors: Lorena Mihelac, Janez Povh
Abstract:
In 2017, we conducted a study on 160 musical excerpts drawn from different musical styles to analyze how the entropy of the harmony affects the acceptability of music. In measuring the entropy of harmony, we considered unigrams (individual chords in the harmonic progression) and bigrams (pairs of adjacent chords). In that study, 53 of the 160 excerpts were rated by participants as very complex even though the entropy of the harmonic progression (unigrams and bigrams) was low. We attributed this to particularities of the chord progression that influence the listener's sense of complexity and acceptability. We re-evaluated the same data with new participants in 2018 and a third time with the same participants in 2019. All three evaluations showed that the 53 excerpts judged difficult and complex in 2017 again elicited a strong feeling of complexity. We proposed that the content of these excerpts, defined as "irregular," fails to meet the listener's expectancy and basic perceptual principles, producing a heightened sense of difficulty and complexity. Because the "irregularities" in these 53 excerpts appear to be perceived without the participants being aware of them, while still affecting pleasantness and perceived complexity, we termed them "subliminal irregularities" and the 53 excerpts "irregular." In our most recent study (2019) of the same data, we proposed a new measure of harmonic complexity, "regularity," based on the irregularities in the harmonic progression and other plausible particularities of the musical structure identified in previous studies. In that study we also proposed a list of 10 particularities assumed to influence participants' perception of harmonic complexity. In this paper, these ten particularities are tested by extending the analysis of the 53 irregular excerpts from harmony to melody. For the melodic analysis we used the computational model Information Dynamics of Music (IDyOM) and two information-theoretic measures: entropy, the uncertainty of the prediction before the next event is heard, and information content, the unexpectedness of an event in a sequence. To describe melodic features in these excerpts we used four viewpoints: pitch, interval, duration, and scale degree. The results show that the texture of the melody (e.g., multiple voices, homorhythmic structure) and its structure (e.g., large interval leaps, syncopated rhythm, implied harmony in compound melodies) influence participants' perception of complexity. High information content values were found in compound melodies, in which implied harmonies appear to suggest additional harmonies, affecting participants' perception of the chord progression and creating a sense of an ambiguous musical structure. Keywords: entropy and information content, harmony, subliminal (ir)regularity, IDyOM
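The two measures named above can be illustrated with a minimal sketch. This is a toy bigram model, not the IDyOM implementation; the chord symbols and the smoothing constant are invented for illustration. Information content is -log2 of the probability the model assigns to the event that actually occurs, and entropy is the uncertainty of the predictive distribution before the next event is heard.

```python
import math
from collections import Counter, defaultdict

def bigram_model(sequence, alpha=0.01):
    """Build an add-alpha smoothed bigram model P(next | previous)."""
    vocab = sorted(set(sequence))
    counts = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    def prob(prev, nxt):
        total = sum(counts[prev].values()) + alpha * len(vocab)
        return (counts[prev][nxt] + alpha) / total
    return vocab, prob

def information_content(prev, event, vocab, prob):
    """Unexpectedness of an event given its context: -log2 P(event | prev)."""
    return -math.log2(prob(prev, event))

def entropy(prev, vocab, prob):
    """Uncertainty of the prediction before the next event is heard."""
    return -sum(prob(prev, v) * math.log2(prob(prev, v)) for v in vocab)

# Hypothetical chord sequence (Roman numerals), for illustration only.
progression = ["I", "IV", "V", "I", "vi", "IV", "V", "I"]
vocab, prob = bigram_model(progression)
print(entropy("V", vocab, prob))                    # uncertainty after hearing "V"
print(information_content("V", "I", vocab, prob))   # surprise of "I" following "V"
```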
Procedia PDF Downloads 131
253 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation
Authors: Miguel Contreras, David Long, Will Bachman
Abstract:
Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through development of virtual-cell models for study of effects of mechanical forces on cells. However, there are challenges with these imaging experiments, which can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and limitation on number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular components morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of nuclei and cell membrane of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through software pipeline for each image to have a mean pixel intensity value of 0.5. An open source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted light microscopy cell images, was trained using this set of normalized z-stacks on a single CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and corresponding nuclei label, results showed qualitatively good predictions on training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality, including clear lining and shape of the membrane, clearly showing the boundaries of each cell, proportionally improved nuclei predictions, reducing errors relative to ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need of using multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for generation of virtual-cell mechanical models.Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models
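The per-image intensity normalization described above can be sketched as follows. This is a minimal illustration assuming a simple multiplicative rescaling to a mean of 0.5; the actual pipeline, file formats, and the label-prediction network are not specified in the abstract.

```python
import numpy as np

def normalize_zstack(zstack, target_mean=0.5, eps=1e-8):
    """Rescale each slice of a confocal z-stack so its mean pixel intensity is target_mean.

    zstack: float array of shape (n_slices, height, width) with values in [0, 1].
    """
    normalized = np.empty_like(zstack, dtype=np.float64)
    for i, img in enumerate(zstack):
        scale = target_mean / (img.mean() + eps)   # multiplicative rescaling (assumption)
        normalized[i] = np.clip(img * scale, 0.0, 1.0)
    return normalized

# Synthetic data standing in for a 20-image membrane z-stack.
stack = np.random.rand(20, 256, 256) * 0.3
print(normalize_zstack(stack).mean(axis=(1, 2))[:3])  # roughly 0.5 per slice
```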
Procedia PDF Downloads 205
252 Single Stage “Fix and Flap” Orthoplastic Approach to Severe Open Tibial Fractures: A Systematic Review of the Outcomes
Authors: Taylor Harris
Abstract:
Gustilo-anderson grade III tibial fractures are exquisitely difficult injuries to manage as they require extensive soft tissue repair in addition to fracture fixation. These injuries are best managed collaboratively by Orthopedic and Plastic surgeons. While utilizing an Orthoplastics approach has decreased the rates of adverse outcomes in these injuries, there is a large amount of variation in exactly how an Orthoplastics team approaches complex cases such as these. It is sometimes recommended that definitive bone fixation and soft tissue coverage be completed simultaneously in a single-stage manner, but there is a paucity of large scale studies to provide evidence to support this recommendation. It is the aim of this study to report the outcomes of a single-stage "fix-and-flap" approach through a systematic review of the available literature. Hopefully, this better informs an evidence-based Orthoplastics approach to managing open tibial fractures. Systematic review of the literature was performed. Medline and Google Scholar were used and all studies published since 2000, in English were included. 103 studies were initially evaluated for inclusion. Reference lists of all included studies were also examined for potentially eligible studies. Gustilo grade III tibial shaft fractures in adults that were managed with a single-stage Orthoplastics approach were identified and evaluated with regard to outcomes of interest. Exclusion criteria included studies with patients <16 years old, case studies, systemic reviews, meta-analyses. Primary outcomes of interest were the rates of deep infections and rates of limb salvage. Secondary outcomes of interest included time to bone union, rates of non-union, and rates of re-operation. 15 studies were eligible. 11 of these studies reported rates of deep infection as an outcome, with rates ranging from 0.98%-20%. The pooled rate between studies was 7.34%. 7 studies reported rates of limb salvage with a range of 96.25%-100%. The pooled rate of the associated studies was 97.8%. 6 reported rates of non-union with a range of 0%-14%, a pooled rate of 6.6%. 6 reported time to bone union with a range of 24 to 40.3 weeks and a pooled average time of 34.2 weeks, and 4 reported rates of reoperation ranging from 7%-55%, with a pooled rate of 31.1%. A few studies that compared a single stage to a multi stage approach side-by-side unanimously favored the single stage approach. Outcomes of Gustilo grade III open tibial fractures utilizing an Orthoplastics approach that is specifically done in a single-stage produce low rates of adverse outcomes. Large scale studies of Orthoplastic collaboration that were not completed in strictly a single stage, or were completed in multiple stages, have not reported as favorable outcomes. We recommend that not only should Orthopedic surgeons and Plastic surgeons collaborate in the management of severe open tibial fracture, but they should plan to undergo definitive fixation and coverage in a single-stage for improved outcomes.Keywords: orthoplastic, gustilo grade iii, single-stage, trauma, systematic review
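The pooled rates reported above can be reproduced with simple sample-size weighting; the abstract does not state the pooling method, so this sketch assumes fixed weighting by study size, and the study counts shown are hypothetical placeholders.

```python
def pooled_rate(studies):
    """Pooled event rate (%) across studies, weighted by sample size.

    studies: list of (events, sample_size) tuples.
    """
    total_events = sum(e for e, n in studies)
    total_n = sum(n for e, n in studies)
    return 100.0 * total_events / total_n

# Hypothetical deep-infection counts from three included studies (illustration only).
deep_infection = [(2, 204), (5, 61), (4, 40)]
print(f"Pooled deep infection rate: {pooled_rate(deep_infection):.2f}%")
```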
Procedia PDF Downloads 86
251 Experimental Investigation on Tensile Durability of Glass Fiber Reinforced Polymer (GFRP) Rebar Embedded in High Performance Concrete
Authors: Yuan Yue, Wen-Wei Wang
Abstract:
The objective of this research is to comprehensively evaluate the impact of alkaline environments on the durability of glass fiber reinforced polymer (GFRP) reinforcement in concrete structures and to further explore its potential value for the construction industry. Specifically, we investigate the effects of two widely used high-performance concrete (HPC) materials on the durability of GFRP bars embedded within them under varying temperature conditions. A total of 279 GFRP bar specimens were manufactured for microstructural and mechanical performance tests: 270 specimens were used to test the residual tensile strength after 120 days of immersion, while 9 specimens were used for microscopic testing to analyze degradation damage. SEM techniques were employed to examine the microstructure of the GFRP and the cover concrete, and unidirectional tensile tests were conducted to determine the remaining tensile strength after corrosion. The experimental variables consisted of four types of concrete (an engineered cementitious composite (ECC), an ultra-high-performance concrete (UHPC), and two ordinary concretes with different compressive strengths) as well as three acceleration temperatures (20, 40, and 60 °C). The experimental results demonstrate that HPC offers superior protection for GFRP bars compared to ordinary concrete. The two types of HPC enhance durability through different mechanisms: one by reducing the pH of the concrete pore fluid and the other by decreasing permeability. ECC improves the durability of embedded GFRP by lowering the pH of the pore fluid: after 120 days of accelerated immersion at 60 °C, GFRP in ECC (pH = 11.5) retained 68.99% of its strength, while GFRP in PC1 (pH = 13.5) retained 54.88%. UHPC, on the other hand, enhances the durability of GFRP reinforcement by reducing the porosity and increasing the compactness of its protective layer. Because of its fillers, UHPC typically exhibits lower porosity, higher density, and greater resistance to permeation than PC2, which has a similar pore-fluid pH; this results in different degrees of durability for GFRP bars embedded in UHPC and PC2 after 120 days of immersion at 60 °C, with residual strengths of 66.32% and 60.89%, respectively. Furthermore, SEM analysis revealed no noticeable evidence of fiber deterioration in any examined specimen, suggesting that uneven stress distribution resulting from interface segregation and matrix damage, rather than fiber corrosion, is the primary cause of the reduction in the tensile strength of the GFRP. Moreover, long-term prediction models were used to calculate residual strength over time for reinforcement embedded in HPC under high-temperature and high-humidity conditions, indicating that reinforcement embedded in HPC retains approximately 75% of its initial strength after 100 years of service. Keywords: GFRP bars, HPC, degeneration, durability, residual tensile strength
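The long-term prediction model used in the study is not described in the abstract. As a heavily simplified sketch, the snippet below assumes a single-exponential strength-retention law fitted through one accelerated-ageing point and ignores the time-shift factors normally used to translate accelerated ageing into real service conditions; the retention values come from the abstract but are treated here purely as illustrative inputs.

```python
import math

def exp_degradation_tau(t_days, retention_pct):
    """Characteristic time of an assumed exponential strength-retention model
    Y(t) = 100 * exp(-t / tau), fitted through a single accelerated-ageing point."""
    return -t_days / math.log(retention_pct / 100.0)

def predict_retention(t_days, tau):
    """Retention (%) predicted by the assumed exponential model."""
    return 100.0 * math.exp(-t_days / tau)

# Retention after 120 days at 60 C, from the abstract (ECC vs. ordinary concrete PC1).
for label, retention in [("ECC", 68.99), ("PC1", 54.88)]:
    tau = exp_degradation_tau(120, retention)
    print(label, f"predicted retention after 1 year: {predict_retention(365, tau):.1f}%")
```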
Procedia PDF Downloads 56
250 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution
Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit
Abstract:
Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous multi-component complex systems in their processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and old processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools which include specific thermodynamic models and is willing to develop computational methodologies such as molecular dynamics simulations to gain insights into the complex interactions in such complex media, and especially hydrogen bonding interactions. The issue at hand concerns aqueous mixtures of polyols with high dry matter content. The polyols mannitol and sorbitol molecules are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, the solubility of sorbitol in water is 2.5 kg/kg of water, while mannitol has a solubility of 0.25 kg/kg of water at 25°C. Therefore, predicting liquid-solid equilibrium properties in this case requires sophisticated solution models that cannot be based solely on chemical group contributions, knowing that for mannitol and sorbitol, the chemical constitutive groups are the same. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution. Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes within aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics
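The hydrogen bond autocorrelation function mentioned above can be illustrated with a minimal sketch of the intermittent correlation C(t) = <h(0)h(t)>/<h(0)h(0)> computed from a boolean bond-existence matrix. The MD engine, trajectory format, and geometric bond criteria used in the study are not specified here, and the data below are synthetic stand-ins.

```python
import numpy as np

def hbond_autocorrelation(h, max_lag):
    """Intermittent hydrogen-bond autocorrelation C(t) = <h(0) h(t)> / <h(0) h(0)>.

    h: boolean array of shape (n_frames, n_bonds); h[f, b] is True when bond b
       satisfies the chosen geometric criterion in frame f.
    """
    h = h.astype(float)
    n_frames = h.shape[0]
    norm = (h * h).mean()
    c = np.empty(max_lag)
    for lag in range(max_lag):
        c[lag] = (h[: n_frames - lag] * h[lag:]).mean() / norm
    return c

# Synthetic existence matrix standing in for sorbitol/mannitol-water hydrogen bonds.
rng = np.random.default_rng(0)
h = rng.random((5000, 200)) < 0.3
print(hbond_autocorrelation(h, max_lag=5))
```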
Procedia PDF Downloads 42
249 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling
Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé
Abstract:
Large forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing of these large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is now widely used to predict these properties before the experiment. The temperature gradient inside the specimen, however, remains challenging: the temperature within the material is not uniform before loading, yet simulations commonly impose a constant temperature on the assumption that it has homogenized after some holding time. To be closer to the experiment, the real temperature distribution through the specimen is therefore needed before mechanical loading. We present here a robust algorithm that computes the temperature gradient within the specimen and thus represents a realistic temperature distribution before deformation; most numerical simulations instead assume a uniform temperature field, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and on the type of deformation, such as upsetting or cogging. Upsetting and cogging are the stages where the largest deformations are observed, and many microstructural phenomena can occur, such as recrystallization, which requires in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product, so identifying the conditions for the initiation of dynamic recrystallization remains relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, and developing a technique to predict this initiation remains challenging. In this perspective, we propose, in addition to the algorithm that provides the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines and compared with a simulation in which an isothermal temperature is imposed. An artificial neural network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. The simulations properly predict the temperature distribution inside the material and the initiation of recrystallization, and the results are compared with models from the literature. Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation
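The temperature-gradient algorithm in the study is implemented in an Abaqus UAMP subroutine and is not reproduced here. As a rough sketch of the underlying idea only, the snippet below solves transient 1D heat conduction from surface to core with an explicit finite-difference scheme; the thermal diffusivity, boundary temperatures, and holding time are invented values, and the boundary conditions are simplifying assumptions.

```python
import numpy as np

def transient_temperature(n_nodes=51, length=0.5, alpha=1.2e-5,
                          t_surface=900.0, t_core=1200.0, hold_time=600.0):
    """Explicit 1D finite-difference solution of dT/dt = alpha * d2T/dx2.

    Models the temperature profile between the surface (x = 0) and the core
    (x = length) of a forged block during a holding period."""
    dx = length / (n_nodes - 1)
    dt = 0.4 * dx * dx / alpha            # satisfies the explicit stability limit (<= 0.5)
    temp = np.full(n_nodes, t_core)
    temp[0] = t_surface                   # fixed surface temperature (assumption)
    for _ in range(int(hold_time / dt)):
        lap = temp[:-2] - 2.0 * temp[1:-1] + temp[2:]
        temp[1:-1] += alpha * dt / dx**2 * lap
        temp[0], temp[-1] = t_surface, temp[-2]   # Dirichlet surface, zero-gradient core side
    return temp

print(transient_temperature()[[0, 10, 25, 50]])  # surface-to-core gradient
```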
Procedia PDF Downloads 80
248 Urban Stratification as a Basis for Analyzing Political Instability: Evidence from Syrian Cities
Authors: Munqeth Othman Agha
Abstract:
The historical formation of urban centres in the eastern Arab world was shaped by rapid urbanization and sudden transformation from the age of the pre-industrial to a post-industrial economy, coupled with uneven development, informal urban expansion, and constant surges in unemployment and poverty rates. The city was stratified accordingly as overlapping layers of division and inequality that have been built on top of each other, creating complex horizontal and vertical divisions based on economic, social, political, and ethno-sectarian basis. This has been further exacerbated during the neoliberal era, which transferred the city into a sort of dual city that is inhabited by heterogeneous and often antagonistic social groups. Economic deprivation combined with a growing sense of marginalization and inequality across the city planted the seeds of political instability, outbreaking in 2011. Unlike other popular uprisings that occupy central squares, as in Egypt and Tunisia, the Syrian uprising in 2011 took place mainly within inner streets and neighborhood squares, mobilizing primarily on more or less upon the lines of stratification. This has emphasized the role of micro-urban and social settings in shaping mobilization and resistance tactics, which necessitates us to understand the way the city was stratified and place it at the center of the city-conflict nexus analysis. This research aims to understand to what extent pre-conflict urban stratification lines played a role in determining the different trajectories of three cities’ neighborhoods (Homs, Dara’a and Deir-ez-Zor). The main argument of the paper is that the way the Syrian city has been stratified creates various social groups within the city who have enjoyed different levels of accessibility to life chances, material resources and social statuses. This determines their relationship with other social groups in the city and, more importantly, their relationship with the state. The advent of a political opportunity will be depicted differently across the city’s different social groups according to their perceived interests and threats, which consequently leads to either political mobilization or demobilization. Several factors, including the type of social structures, built environment, and state response, determine the ability of social actors to transfer the repertoire of contention to collective action or transfer from social actors to political actors. The research uses urban stratification lines as the basis for understanding the different patterns of political upheavals in urban areas while explaining why neighborhoods with different social and urban environment settings had different abilities and capacities to mobilize, resist state repression and then descend into a military conflict. It particularly traces the transformation from social groups to social actors and political actors by applying the Explaining-outcome Process-Tracing method to depict the causal mechanisms that led to including or excluding different neighborhoods from each stage of the uprising, namely mobilization (M1), response (M2), and control (M3).Keywords: urban stratification, syrian conflict, social movement, process tracing, divided city
Procedia PDF Downloads 72
247 Computational Investigation on Structural and Functional Impact of Oncogenes and Tumor Suppressor Genes on Cancer
Authors: Abdoulie K. Ceesay
Abstract:
Within the sequence of the whole genome, it is known that 99.9% of the human genome is similar, whilst our difference lies in just 0.1%. Among these minor dissimilarities, the most common type of genetic variations that occurs in a population is SNP, which arises due to nucleotide substitution in a protein sequence that leads to protein destabilization, alteration in dynamics, and other physio-chemical properties’ distortions. While causing variations, they are equally responsible for our difference in the way we respond to a treatment or a disease, including various cancer types. There are two types of SNPs; synonymous single nucleotide polymorphism (sSNP) and non-synonymous single nucleotide polymorphism (nsSNP). sSNP occur in the gene coding region without causing a change in the encoded amino acid, while nsSNP is deleterious due to its replacement of a nucleotide residue in the gene sequence that results in a change in the encoded amino acid. Predicting the effects of cancer related nsSNPs on protein stability, function, and dynamics is important due to the significance of phenotype-genotype association of cancer. In this thesis, Data of 5 oncogenes (ONGs) (AKT1, ALK, ERBB2, KRAS, BRAF) and 5 tumor suppressor genes (TSGs) (ESR1, CASP8, TET2, PALB2, PTEN) were retrieved from ClinVar. Five common in silico tools; Polyphen, Provean, Mutation Assessor, Suspect, and FATHMM, were used to predict and categorize nsSNPs as deleterious, benign, or neutral. To understand the impact of each variation on the phenotype, Maestro, PremPS, Cupsat, and mCSM-NA in silico structural prediction tools were used. This study comprises of in-depth analysis of 10 cancer gene variants downloaded from Clinvar. Various analysis of the genes was conducted to derive a meaningful conclusion from the data. Research done indicated that pathogenic variants are more common among ONGs. Our research also shows that pathogenic and destabilizing variants are more common among ONGs than TSGs. Moreover, our data indicated that ALK(409) and BRAF(86) has higher benign count among ONGs; whilst among TSGs, PALB2(1308) and PTEN(318) genes have higher benign counts. Looking at the individual cancer genes predisposition or frequencies of causing cancer according to our research data, KRAS(76%), BRAF(55%), and ERBB2(36%) among ONGs; and PTEN(29%) and ESR1(17%) among TSGs have higher tendencies of causing cancer. Obtained results can shed light to the future research in order to pave new frontiers in cancer therapies.Keywords: tumor suppressor genes (TSGs), oncogenes (ONGs), non synonymous single nucleotide polymorphism (nsSNP), single nucleotide polymorphism (SNP)
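The categorization of variants across the five predictors described above can be illustrated with a minimal majority-vote aggregation. The tool output formats and thresholds are simplified here, the variant and its per-tool calls are hypothetical, and this is not the aggregation scheme used in the study.

```python
from collections import Counter

def consensus_call(tool_calls):
    """Majority vote across in silico predictors; ties are reported as 'uncertain'.

    tool_calls: dict mapping tool name to one of 'deleterious', 'benign', 'neutral'.
    """
    counts = Counter(tool_calls.values())
    (top, n_top), *rest = counts.most_common()
    if rest and rest[0][1] == n_top:
        return "uncertain"
    return top

# Hypothetical calls for one KRAS variant (illustration only).
calls = {"PolyPhen": "deleterious", "PROVEAN": "deleterious",
         "MutationAssessor": "deleterious", "SuSPect": "benign", "FATHMM": "deleterious"}
print(consensus_call(calls))  # -> 'deleterious'
```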
Procedia PDF Downloads 86
246 Expanding Behavioral Crisis Care: Expansion of Psychiatric and Addiction-Care Services through a 23/7 Behavioral Crisis Center
Authors: Garima Singh
Abstract:
Objectives: Behavioral Crisis Center (BCC) is a community solution to a community problem. There has been an exponential increase in the incidence and prevalence of mental health crises around the world. The effects of the crisis negatively impact our patients and their families and strain the law enforcement and emergency room. The goal of the multi-disciplinary care model is to break the crisis cycle and provide 24-7 rapid access to an acre and crisis stabilization. We initiated our first BCC care center in 2020 in the midst of the COVID pandemic and have seen a remarkable improvement in patient ‘care and positive financial outcome. Background: Mental illnesses are common in the United States. Nearly one in five U.S. adults live with a mental illness (52.9 million in 2020). This number represented 21.0% of all U.S. adults. To address some of these challenges and help our community, In May 2020, we opened our first Behavioral crisis center (BCC). Since then, we have served more than 2500 patients and is the first southwest Missouri’s first 24/7 facility for crisis–level behavioral health and substance use needs. It has been proven to be a more effective place than emergency departments, jails, or local law enforcement. Methods: BCC was started in 2020 to serve the unmet need of the community and provide access to behavioral health and substance use services identified in the community. Funding was possible with significant investment from the county and Missouri Foundation for Health, with contributions from medical partners. It is a multi-disciplinary care center consisting of Physicians, nurse practitioners, nurses, behavioral technicians, peer support specialists, clinical intake specialists, and clinical coordinators and hospitality specialists. The center provides services including psychiatry care, outpatient therapy, community support services, primary care, peer support and engagement. It is connected to a residential treatment facility for substance use treatment for continuity of care and bridging the gap, which has resulted in the completion of treatment and better outcomes. Results: BCC has proven to be a great resource to the community and the Missouri Health Coalition is providing funding to replicate the model in other regions and work on a similar model for children and adolescents. Overall, 29% of the patients seen at BCC are stabilized and discharged with outpatient care. 50% needed acute stabilization in a hospital setting and 21% required long-term admission, mostly for substance use treatment. The local emergency room had a 42% reduction in behavioral health encounters compared to the previous 3 years. Also, by a quick transfer to BCC, the average stay in ER was reduced by 10 hours and time to follow up behavioral health assessment decreased by an average of 4 hours. Uninsured patients are also provided Medicaid application assistance which has benefited 55% of individuals receiving care at BCC. Conclusions: BCC is impacting community health and improving access to quality care and substance use treatment. It is a great investment for our patients and families.Keywords: BCC, behvaioral health, community health care, addiction treatment
Procedia PDF Downloads 76
245 Strategic Interventions to Address Health Workforce and Current Disease Trends, Nakuru, Kenya
Authors: Paul Moses Ndegwa, Teresia Kabucho, Lucy Wanjiru, Esther Wanjiru, Brian Githaiga, Jecinta Wambui
Abstract:
Health outcomes in Kenya have improved since 2013, following the adoption of the new constitution, which devolved governance and transferred administration and health planning functions to county governments. The 2018-2022 development agenda prioritized universal health coverage, food security, and nutrition; however, the emergence of COVID-19 and the increase in non-communicable diseases pose challenges and constraints to an already overwhelmed health system. A study was conducted from July to November 2021 to establish the key challenges in achieving universal health coverage within the county and the best practices for improved non-communicable disease control. Fourteen health workers, including nurses, doctors, public health officers, clinical officers, and pharmaceutical technologists, were purposively engaged to provide critical information through questionnaires administered by a trained pair of researchers observing ethical procedures on confidentiality, and the data were then analyzed. Communicable diseases are major causes of morbidity and mortality, while non-communicable diseases contribute to approximately 39% of deaths. More than 45% of the population does not have access to safe drinking water. The study noted geographic inequality in the distribution and use of health resources, including competing non-health priorities. Of the health workers, 56% are nurses, 13% clinical officers, 7% doctors, 9% public health workers, and 2% pharmaceutical technologists. Poor-quality data limit the validity of disease-burden estimates and research activities. Risk factors include unsafe water, poor sanitation and hand washing, unsafe sex, and malnutrition. A key challenge in achieving universal health coverage is the rising relative contribution of non-communicable diseases. The study recommends improving targeted disease control with effective and equitable resource allocation; developing control mechanisms for highly infectious diseases; improving the quality of data for decision-making; strengthening electronic data-capture systems; increasing investment in the health workforce to improve health service provision and achieve universal health coverage; creating a favorable environment to retain health workers; filling staffing gaps that have resulted in shortages of doctors (7%); developing a multi-sectoral approach to health workforce planning and management; investing in mechanisms that generate contextual evidence on current and future health workforce needs; ensuring the retention of a qualified, skilled, and motivated health workforce; and delivering integrated, people-centered health services. Keywords: multi-sectional approach, equity, people-centered, health workforce retention
Procedia PDF Downloads 113
244 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandra: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability combined with a spatial-ecological model draws attention to urban environments in design review management so that they comply with the Earth system. The exchange patterns of natural ecosystems follow consistent, periodic cycles that preserve energy and material flows in the Earth system. The probabilistic risk assessment (PRA) technique is used to assess the safety of the industrial complex, and Failure Mode and Effects Analysis (FMEA) is applied as a complementary analytical approach for critical components. The plant safety parameters are identified through an engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed, and assessed for the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and chlorine gas concentration in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The predicted accident consequences are traced as risk contours of concentration. The local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts distribution schemes from the perspective of pollutants, considering multiple factors in a multi-criteria analysis. The data extend an input-output analysis to evaluate spillover effects, and Monte Carlo simulations and sensitivity analyses were conducted. The structures involved are balanced within "equilibrium patterns," such as the biosphere, and collectively form a composite index of many distributed feedback flows. These dynamic structures are related to their physical and chemical properties and enable a gradual and prolonged incremental pattern. While this spatial model structure is argued from ecology, resource savings, static load design, financial, and other pragmatic considerations, the outcomes are not decisive from an artistic or architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structure for developing urban environments using optimization software, applied as an example of an integrated industrial structure in which the process is based on engineering topology as an optimization approach in systems ecology. Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology
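The Gaussian plume calculation mentioned above can be sketched as follows. The dispersion coefficients use a simple power-law stand-in for the Pasquill-Gifford curves, and the emission rate, wind speed, and release height are invented for illustration; they are not the study's values.

```python
import math

def gaussian_plume(x, y, z, q, u, h_eff, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration (kg/m^3) at receptor (x, y, z).

    x: downwind distance (m), y: crosswind offset (m), z: receptor height (m),
    q: emission rate (kg/s), u: wind speed (m/s), h_eff: effective release height (m).
    """
    sigma_y = a * x ** 0.9          # assumed crosswind dispersion (m)
    sigma_z = b * x ** 0.85         # assumed vertical dispersion (m)
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h_eff) ** 2 / (2 * sigma_z**2))
                + math.exp(-(z + h_eff) ** 2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative centerline, near-ground concentrations for a hypothetical chlorine release.
for x in (200, 500, 1000, 2000):
    print(x, gaussian_plume(x, y=0.0, z=1.5, q=2.0, u=3.0, h_eff=10.0))
```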
Procedia PDF Downloads 80
243 Global-Scale Evaluation of Two Satellite-Based Passive Microwave Soil Moisture Data Sets (SMOS and AMSR-E) with Respect to Modelled Estimates
Authors: A. Alyaari, J. P. Wigneron, A. Ducharne, Y. Kerr, P. de Rosnay, R. de Jeu, A. Govind, A. Al Bitar, C. Albergel, J. Sabater, C. Moisy, P. Richaume, A. Mialon
Abstract:
Global Level-3 surface soil moisture (SSM) maps from the passive microwave soil moisture and Ocean Salinity satellite (SMOSL3) have been released. To further improve the Level-3 retrieval algorithm, evaluation of the accuracy of the spatio-temporal variability of the SMOS Level 3 products (referred to here as SMOSL3) is necessary. In this study, a comparative analysis of SMOSL3 with a SSM product derived from the observations of the Advanced Microwave Scanning Radiometer (AMSR-E) computed by implementing the Land Parameter Retrieval Model (LPRM) algorithm, referred to here as AMSRM, is presented. The comparison of both products (SMSL3 and AMSRM) were made against SSM products produced by a numerical weather prediction system (SM-DAS-2) at ECMWF (European Centre for Medium-Range Weather Forecasts) for the 03/2010-09/2011 period at global scale. The latter product was considered here a 'reference' product for the inter-comparison of the SMOSL3 and AMSRM products. Three statistical criteria were used for the evaluation, the correlation coefficient (R), the root-mean-squared difference (RMSD), and the bias. Global maps of these criteria were computed, taking into account vegetation information in terms of biome types and Leaf Area Index (LAI). We found that both the SMOSL3 and AMSRM products captured well the spatio-temporal variability of the SM-DAS-2 SSM products in most of the biomes. In general, the AMSRM products overestimated (i.e., wet bias) while the SMOSL3 products underestimated (i.e., dry bias) SSM in comparison to the SM-DAS-2 SSM products. In term of correlation values, the SMOSL3 products were found to better capture the SSM temporal dynamics in highly vegetated biomes ('Tropical humid', 'Temperate Humid', etc.) while best results for AMSRM were obtained over arid and semi-arid biomes ('Desert temperate', 'Desert tropical', etc.). When removing the seasonal cycles in the SSM time variations to compute anomaly values, better correlation with the SM-DAS-2 SSM anomalies were obtained with SMOSL3 than with AMSRM, in most of the biomes with the exception of desert regions. Eventually, we showed that the accuracy of the remotely sensed SSM products is strongly related to LAI. Both the SMOSL3 and AMSRM (slightly better) SSM products correlate well with the SM-DAS2 products over regions with sparse vegetation for values of LAI < 1 (these regions represent almost 50% of the pixels considered in this global study). In regions where LAI>1, SMOSL3 outperformed AMSRM with respect to SM-DAS-2: SMOSL3 had almost consistent performances up to LAI = 6, whereas AMSRM performance deteriorated rapidly with increasing values of LAI.Keywords: remote sensing, microwave, soil moisture, AMSR-E, SMOS
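The three evaluation criteria named above can be computed per grid cell as in the sketch below; the series here are synthetic stand-ins for the SMOSL3/AMSRM products and the SM-DAS-2 reference, not the study's data.

```python
import numpy as np

def evaluation_metrics(satellite, reference):
    """Correlation (R), root-mean-squared difference (RMSD), and bias of a satellite
    soil-moisture series against a reference series (e.g., SM-DAS-2)."""
    satellite, reference = np.asarray(satellite), np.asarray(reference)
    valid = ~(np.isnan(satellite) | np.isnan(reference))
    s, r = satellite[valid], reference[valid]
    corr = np.corrcoef(s, r)[0, 1]
    rmsd = np.sqrt(np.mean((s - r) ** 2))
    bias = np.mean(s - r)
    return corr, rmsd, bias

# Synthetic daily series (m3/m3) standing in for one grid cell over 03/2010-09/2011.
rng = np.random.default_rng(1)
reference = 0.25 + 0.05 * np.sin(np.linspace(0, 12, 550)) + 0.02 * rng.standard_normal(550)
satellite = reference + 0.03 + 0.03 * rng.standard_normal(550)   # wet-biased product
print(evaluation_metrics(satellite, reference))
```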
Procedia PDF Downloads 357
242 Working Memory and Audio-Motor Synchronization in Children with Different Degrees of Central Nervous System's Lesions
Authors: Anastasia V. Kovaleva, Alena A. Ryabova, Vladimir N. Kasatkin
Abstract:
Background: The most simple form of entrainment to a sensory (typically auditory) rhythmic stimulus involves perceiving and synchronizing movements with an isochronous beat with one level of periodicity, such as that produced by a metronome. Children with pediatric cancer usually treated with chemo- and radiotherapy. Because of such treatment, psychologists and health professionals declare cognitive and motor abilities decline in cancer patients. The purpose of our study was to measure working memory characteristics with association with audio-motor synchronization tasks, also involved some memory resources, in children with different degrees of central nervous system lesions: posterior fossa tumors, acute lymphoblastic leukemia, and healthy controls. Methods: Our sample consisted of three groups of children: children treated for posterior fossa tumors (PFT-group, n=42, mean age 12.23), children treated for acute lymphoblastic leukemia (ALL-group, n=11, mean age 11.57) and neurologically healthy children (control group, n=36, mean age 11.67). Participants were tested for working memory characteristics with Cambridge Neuropsychological Test Automated Battery (CANTAB). Pattern recognition memory (PRM) and spatial working memory (SWM) tests were applied. Outcome measures of PRM test include the number and percentage of correct trials and latency (speed of participant’s response), and measures of SWM include errors, strategy, and latency. In the synchronization tests, the instruction was to tap out a regular beat (40, 60, 90 and 120 beats per minute) in synchrony with the rhythmic sequences that were played. This meant that for the sequences with an isochronous beat, participants were required to tap into every auditory event. Variations of inter-tap-intervals and deviations of children’s taps from the metronome were assessed. Results: Analysis of variance revealed the significant effect of group (ALL, PFT and control) on such parameters as short-term PRM, SWM strategy and errors. Healthy controls demonstrated more correctly retained elements, better working memory strategy, compared to cancer patients. Interestingly that ALL patients chose the bad strategy, but committed significantly less errors in SWM test then PFT and controls did. As to rhythmic ability, significant associations of working memory were found out only with 40 bpm rhythm: the less variable were inter-tap-intervals of the child, the more elements in memory he/she could retain. The ability to audio-motor synchronization may be related to working memory processes mediated by the prefrontal cortex whereby each sensory event is actively retrieved and monitored during rhythmic sequencing. Conclusion: Our results suggest that working memory, tested with appropriate cognitive methods, is associated with the ability to synchronize movements with rhythmic sounds, especially in sub-second intervals (40 per minute).Keywords: acute lymphoblastic leukemia (ALL), audio-motor synchronization, posterior fossa tumor, working memory
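The tapping measures described above (inter-tap-interval variability and deviation from the metronome) can be computed as in this minimal sketch; the tap times are invented, and the metronome grid is assumed to be aligned with the first tap, which is a simplification.

```python
import numpy as np

def tapping_measures(tap_times, bpm):
    """Inter-tap-interval (ITI) variability and mean asynchrony against an isochronous beat.

    tap_times: tap onsets in seconds; bpm: metronome tempo in beats per minute.
    """
    period = 60.0 / bpm
    itis = np.diff(tap_times)
    cv_iti = itis.std(ddof=1) / itis.mean()                      # ITI variability
    beats = tap_times[0] + period * np.arange(len(tap_times))    # grid aligned with first tap (assumption)
    asynchrony = np.mean(np.asarray(tap_times) - beats)          # mean deviation from the metronome
    return cv_iti, asynchrony

# Hypothetical taps at 40 bpm (period 1.5 s) with small timing errors.
rng = np.random.default_rng(2)
taps = np.cumsum(1.5 + 0.05 * rng.standard_normal(20))
print(tapping_measures(taps, bpm=40))
```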
Procedia PDF Downloads 300
241 Relationships of Plasma Lipids, Lipoproteins and Cardiovascular Outcomes with Climatic Variations: A Large 8-Year Period Brazilian Study
Authors: Vanessa H. S. Zago, Ana Maria H. de Avila, Paula P. Costa, Welington Corozolla, Liriam S. Teixeira, Eliana C. de Faria
Abstract:
Objectives: The outcome of cardiovascular disease is affected by environment and climate. This study evaluated the possible relationships between climatic and environmental changes and the occurrence of biological rhythms in serum lipids and lipoproteins in a large population sample in the city of Campinas, State of Sao Paulo, Brazil. In addition, it determined the temporal variations of death due to atherosclerotic events in Campinas during the time window examined. Methods: A large 8-year retrospective study was carried out to evaluate the lipid profiles of individuals attended at the University of Campinas (Unicamp). The study population comprised 27.543 individuals of both sexes and of all ages. Normolipidemic and dyslipidemic individuals classified according to Brazilian guidelines on dyslipidemias, participated in the study. For the same period, the temperature, relative humidity and daily brightness records were obtained from the Centro de Pesquisas Meteorologicas e Climaticas Aplicadas a Agricultura/Unicamp and frequencies of death due to atherosclerotic events in Campinas were acquired from the Brazilian official database DATASUS, according to the International Classification of Diseases. Statistical analyses were performed using both Cosinor and ARIMA temporal analysis methods. For cross-correlation analysis between climatic and lipid parameters, cross-correlation functions were used. Results: Preliminary results indicated that rhythmicity was significant for LDL-C and HDL-C in the cases of both normolipidemic and dyslipidemic subjects (n =respectively 11.892 and 15.651 both measures increasing in the winter and decreasing in the summer). On the other hand, for dyslipidemic subjects triglycerides increased in summer and decreased in winter, in contrast to normolipidemic ones, in which triglycerides did not show rhythmicity. The number of deaths due to atherosclerotic events showed significant rhythmicity, with maximum and minimum frequencies in winter and summer, respectively. Cross-correlation analyzes showed that low humidity and temperature, higher thermal amplitude and dark cycles are associated with increased levels of LDL-C and HDL-C during winter. In contrast, TG showed moderate cross-correlations with temperature and minimum humidity in an inverse way: maximum temperature and humidity increased TG during the summer. Conclusions: This study showed a coincident rhythmicity between low temperatures and high concentrations of LDL-C and HDL-C and the number of deaths due to atherosclerotic cardiovascular events in individuals from the city of Campinas. The opposite behavior of cholesterol and TG suggest different physiological mechanisms in their metabolic modulation by climate parameters change. Thus, new analyses are underway to better elucidate these mechanisms, as well as variations in lipid concentrations in relation to climatic variations and their associations with atherosclerotic disease and death outcomes in Campinas.Keywords: atherosclerosis, climatic variations, lipids and lipoproteins, associations
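The Cosinor analysis referred to above can be illustrated with a single-component fit by linear least squares at a fixed annual period; the series below is a synthetic stand-in for the lipid data, and the period and noise level are assumptions.

```python
import numpy as np

def cosinor_fit(t_days, y, period=365.25):
    """Single-component cosinor y = M + A*cos(2*pi*t/period + phi) via linear least squares.

    Returns (MESOR, amplitude, acrophase in radians)."""
    omega = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_days), np.cos(omega * t_days), np.sin(omega * t_days)])
    mesor, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(beta, gamma)
    acrophase = np.arctan2(-gamma, beta)
    return mesor, amplitude, acrophase

# Synthetic LDL-C-like weekly series with a winter peak (illustration only).
t = np.arange(0, 8 * 365, 7.0)
rng = np.random.default_rng(3)
ldl = 130 + 8 * np.cos(2 * np.pi * t / 365.25) + 5 * rng.standard_normal(t.size)
print(cosinor_fit(t, ldl))
```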
Procedia PDF Downloads 117
240 Aligning Informatics Study Programs with Occupational and Qualifications Standards
Authors: Patrizia Poscic, Sanja Candrlic, Danijela Jaksic
Abstract:
The University of Rijeka, Department of Informatics participated in the Stand4Info project, co-financed by the European Union, with the main idea of an alignment of study programs with occupational and qualifications standards in the field of Informatics. A brief overview of our research methodology, goals and deliverables is shown. Our main research and project objectives were: a) development of occupational standards, qualification standards and study programs based on the Croatian Qualifications Framework (CROQF), b) higher education quality improvement in the field of information and communication sciences, c) increasing the employability of students of information and communication technology (ICT) and science, and d) continuously improving competencies of teachers in accordance with the principles of CROQF. CROQF is a reform instrument in the Republic of Croatia for regulating the system of qualifications at all levels through qualifications standards based on learning outcomes and following the needs of the labor market, individuals and society. The central elements of CROQF are learning outcomes - competences acquired by the individual through the learning process and proved afterward. The place of each acquired qualification is set by the level of the learning outcomes belonging to that qualification. The placement of qualifications at respective levels allows the comparison and linking of different qualifications, as well as linking of Croatian qualifications' levels to the levels of the European Qualifications Framework and the levels of the Qualifications framework of the European Higher Education Area. This research has made 3 proposals of occupational standards for undergraduate study level (System Analyst, Developer, ICT Operations Manager), and 2 for graduate (master) level (System Architect, Business Architect). For each occupational standard employers have provided a list of key tasks and associated competencies necessary to perform them. A set of competencies required for each particular job in the workplace was defined and each set of competencies as described in more details by its individual competencies. Based on sets of competencies from occupational standards, sets of learning outcomes were defined and competencies from the occupational standard were linked with learning outcomes. For each learning outcome, as well as for the set of learning outcomes, it was necessary to specify verification method, material, and human resources. The task of the project was to suggest revision and improvement of the existing study programs. It was necessary to analyze existing programs and determine how they meet and fulfill defined learning outcomes. This way, one could see: a) which learning outcomes from the qualifications standards are covered by existing courses, b) which learning outcomes have yet to be covered, c) are they covered by mandatory or elective courses, and d) are some courses unnecessary or redundant. 
Overall, the main research results are: a) completed proposals of qualification and occupational standards in the field of ICT, b) revised curricula of undergraduate and master study programs in ICT, c) sustainable partnership and association stakeholders network, d) knowledge network - informing the public and stakeholders (teachers, students, and employers) about the importance of CROQF establishment, and e) teachers educated in innovative methods of teaching.Keywords: study program, qualification standard, occupational standard, higher education, informatics and computer science
Procedia PDF Downloads 143
239 Redefining Surgical Innovation in Urology: A Historical Perspective of the Original Publications on Pioneering Techniques in Urology
Authors: Samuel Sii, David Homewood, Brendan Dittmer, Tony Nzembela, Jonathan O’Brien, Niall Corcoran, Dinesh Agarwal
Abstract:
Introduction: Innovation is key to the advancement of medicine and improvement in patient care. This is particularly true in surgery, where pioneering techniques have transformed operative management from a historically highly risky peri-morbid and disfiguring to the contemporary low-risk, sterile and minimally invasive treatment modality. There is a delicate balance between enabling innovation and minimizing patient harm. Publication and discussion of novel surgical techniques allow for independent expert review. Recent journals have increasingly stringent requirements for publications and often require larger case volumes for novel techniques to be published. This potentially impairs the initial publication of novel techniques and slows innovation. The historical perspective provides a better understanding of how requirements for the publication of new techniques have evolved over time. This is essential in overcoming challenges in developing novel techniques. Aims and Objectives: We explore how novel techniques in Urology have been published over the past 200 years. Our objective is to describe the trend and publication requirements of novel urological techniques, both historical and present. Methods: We assessed all major urological operations using multipronged historical analysis. An initial literature search was carried out through PubMed and Google Scholar for original literature descriptions, followed by reference tracing. The first publication of each pioneering urological procedure was recorded. Data collected includes the year of publication, description of the procedure, number of cases and outcomes. Results: 65 papers describing pioneering techniques in Urology were identified. These comprised of 2 experimental studies, 17 case reports and 46 case series. These papers described various pioneering urological techniques in urological oncology, reconstructive urology and endourology. We found that, historically, techniques were published with smaller case numbers. Often, the surgical technique itself was a greater focus of the publication than patient outcome data. These techniques were often adopted prior to larger publications. In contrast, the risks and benefits of recent novel techniques are often well-defined prior to adoption. This historical perspective is important as recent journals have requirements for larger case series and data outcomes. This potentially impairs the initial publication of novel techniques and slows innovation. Conclusion: A better understanding of historical publications and their effect on the adoption of urological techniques into common practice could assist the current generation of Urologists in formulating a safe, efficacious process in promoting surgical innovation and the development of novel surgical techniques. We propose the reassessment of requirements for the publication of novel operative techniques by splitting technical perspectives and data-orientated case series. Existing frameworks such as IDEAL and ASERNIP-S should be integrated into current processes when investigating and developing new surgical techniques to ensure efficacious and safe innovation within surgery is encouraged.Keywords: urology, surgical innovation, novel surgical techniques, publications
Procedia PDF Downloads 49
238 Feasibility and Acceptability of Mindfulness-Based Cognitive Therapy in People with Depression and Cardiovascular Disorders: A Feasibility Randomised Controlled Trial
Authors: Modi Alsubaie, Chris Dickens, Barnaby Dunn, Andy Gibson, Obioha Ukoumunned, Alison Evans, Rachael Vicary, Manish Gandhi, Willem Kuyken
Abstract:
Background: Depression co-occurs in 20% of people with cardiovascular disorders, can persist for years and predicts worse physical health outcomes. While psychosocial treatments have been shown to effectively treat acute depression in those with comorbid cardiovascular disorders, to date there has been no evaluation of approaches aiming to prevent relapse and treat residual depression symptoms in this group. Therefore, the current study aimed to examine the feasibility and acceptability of a randomised controlled trial design evaluating an adapted version of mindfulness-based cognitive therapy (MBCT) designed specifically for people with co-morbid depression and cardiovascular disorders. Methods: A 3-arm feasibility randomised controlled trial was conducted, comparing MBCT adapted for people with cardiovascular disorders plus treatment as usual (TAU), mindfulness-based stress reduction (MBSR) plus TAU, and TAU alone. Participants completed a set of self-report measures of depression severity, anxiety, quality of life, illness perceptions, mindfulness, self-compassion and affect and had their blood pressure taken immediately before, immediately after, and three months following the intervention. Those in the adapted-MBCT arm additionally underwent a qualitative interview to gather their views about the adapted intervention. Results: 3400 potentially eligible participants were approached when attending an outpatient appointment at a cardiology clinic or via a GP letter following a case note search. 242 (7.1%) were interested in taking part, 59 (1.7%) were screened as being suitable, and 33 (<1%) were eventually randomised to the three groups. The sample was heterogeneous in terms of whether they reported current depression or had a history of depression and the time since the onset of cardiovascular disease (one to 25 years). Of 11 participants randomised to adapted MBCT seven completed the full course, levels of home mindfulness practice were high, and positive qualitative feedback about the intervention was given. Twenty-nine out of 33 participants randomised completed all the assessment measures at all three-time points. With regards to the primary outcome (depression), five out of the seven people who completed the adapted MBCT and three out of five under MBSR showed significant clinical change, while in TAU no one showed any clinical change at the three-month follow-up. Conclusions: The adapted MBCT intervention was feasible and acceptable to participants. However, aspects of the trial design were not feasible. In particular, low recruitment rates were achieved, and there was a high withdrawal rate between screening and randomisation. Moreover, the heterogeneity in the sample was high meaning the adapted intervention was unlikely to be well tailored to all participants needs. This suggests that if the decision is made to move to a definitive trial, study recruitment procedures will need to be revised to more successfully recruit a target sample that optimally matches the adapted intervention.Keywords: mindfulness-based cognitive therapy (MBCT), depression, cardiovascular disorders, feasibility, acceptability
Procedia PDF Downloads 218
237 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques
Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo
Abstract:
Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to the fast economic growth over the last ten years. Bogotá has been affected by high pollution events which led to the high concentration of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 g/m3 each). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems) and it is known that their concentrations in the atmosphere depend on the local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of the atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2 and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá, using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network within the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters, and afterwards, the K-means clustering technique was implemented to corroborate those relations found previously and to find patterns in the data. PCA was also used on a per shift basis (morning, afternoon, night and early morning) to validate possible variation of the previous trends and a per year basis to verify that the identified trends have remained throughout the study time. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the most influencing factors on PM10 concentrations. Furthermore, it was confirmed that high humidity episodes increased PM2,5 levels. It was also found that there are direct proportional relationships between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. Concentrations of SO2 increases with the presence of PM10 and decreases with the wind speed and wind direction. They proved as well that there is a decreasing trend of pollutant concentrations over the last five years. Also, in rainy periods (March-June and September-December) some trends regarding precipitations were stronger. Results obtained with K-means demonstrated that it was possible to find patterns on the data, and they also showed similar conditions and data distribution among Carvajal, Tunal and Puente Aranda stations, and also between Parque Simon Bolivar and las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that PCA algorithm is useful to establish preliminary relationships among variables, and K-means clustering to find patterns in the data and understanding its distribution. The discovery of patterns in the data allows using these clusters as an input to an Artificial Neural Network prediction model.Keywords: air pollution, air quality modelling, data mining, particulate matter
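The PCA and K-means steps described above can be mirrored with a minimal scikit-learn pipeline; the column names and synthetic records are placeholders, not the Bogotá Air Quality Monitoring Network data, and the number of components and clusters are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Placeholder hourly records standing in for the 2010-2015 monitoring-network data.
rng = np.random.default_rng(4)
df = pd.DataFrame(rng.standard_normal((5000, 8)),
                  columns=["PM10", "PM2.5", "NO2", "O3", "SO2",
                           "wind_speed", "temperature", "humidity"])

scaled = StandardScaler().fit_transform(df)

# PCA: preliminary relations between pollutants and meteorological variables.
pca = PCA(n_components=3).fit(scaled)
loadings = pd.DataFrame(pca.components_.T, index=df.columns, columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))
print("explained variance:", pca.explained_variance_ratio_.round(2))

# K-means: corroborate those relations and look for patterns across conditions/stations.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)
print(df.assign(cluster=labels).groupby("cluster").mean().round(2))
```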
Procedia PDF Downloads 258236 The Solid-Phase Sensor Systems for Fluorescent and SERS-Recognition of Neurotransmitters for Their Visualization and Determination in Biomaterials
Authors: Irina Veselova, Maria Makedonskaya, Olga Eremina, Alexandr Sidorov, Eugene Goodilin, Tatyana Shekhovtsova
Abstract:
Such catecholamines as dopamine, norepinephrine, and epinephrine are the principal neurotransmitters in the sympathetic nervous system. Catecholamines and their metabolites are considered to be important markers of socially significant diseases such as atherosclerosis, diabetes, coronary heart disease, carcinogenesis, Alzheimer's and Parkinson's diseases. Currently, neurotransmitters can be studied via electrochemical and chromatographic techniques that allow their characterization and quantification, although these techniques can only provide crude spatial information. Moreover, the difficulty of catecholamine determination in biological materials is associated with their low normal concentrations (~ 1 nM) in biomaterials, which may fall a further order of magnitude lower in some disorders. In addition, in blood they are rapidly oxidized by monoamine oxidases from thrombocytes and, for this reason, the determination of neurotransmitter metabolism indicators in an organism should be very rapid (15-30 min), especially in critical states. Unfortunately, modern instrumental analysis does not offer a comprehensive solution to this problem: despite its high sensitivity and selectivity, HPLC-MS cannot provide sufficiently rapid analysis, while enzymatic biosensors and immunoassays for the determination of the considered analytes lack sufficient sensitivity and reproducibility. Fluorescent and SERS-sensors remain a compelling technology for approaching the general problem of selective neurotransmitter detection. In recent years, a number of catecholamine sensors have been reported, including RNA aptamers, fluorescent ribonucleopeptide (RNP) complexes, and boronic acid-based synthetic receptors, with the sensors operating in a turn-off mode. In this work we present fluorescent and SERS turn-on sensor systems based on the bio- or chemorecognizing nanostructured films {chitosan/collagen-Tb/Eu/Cu-nanoparticles-indicator reagents} that provide the selective recognition, visualization, and sensing of the above-mentioned catecholamines at the level of nanomolar concentrations in biomaterials (cell cultures, tissues, etc.). We have (1) developed optically transparent porous films and gels of chitosan/collagen; (2) ensured functionalization of the surface with 'recognizer' molecules (by impregnation and immobilization of components of the indicator systems: biorecognizing and auxiliary reagents); (3) performed computer simulation for theoretical prediction and interpretation of some properties of the developed materials, and obtained analytical signals in biomaterials. We are grateful for the financial support of this research from the Russian Foundation for Basic Research (grants no. 15-03-05064 a, and 15-29-01330 ofi_m).Keywords: biomaterials, fluorescent and SERS-recognition, neurotransmitters, solid-phase turn-on sensor system
Procedia PDF Downloads 406235 Knowledge Management and Administrative Effectiveness of Non-teaching Staff in Federal Universities in the South-West, Nigeria
Authors: Nathaniel Oladimeji Dixon, Adekemi Dorcas Fadun
Abstract:
Educational managers have observed a downward trend in the administrative effectiveness of non-teaching staff in federal universities in South-west Nigeria. This is evident in the low-quality service delivery of administrators and unaccomplished institutional goals and missions of higher education. Scholars have thus indicated the need for the deployment and adoption of a practice that encourages information collection and sharing among stakeholders with a view to improving service delivery and outcomes. This study examined the extent to which knowledge management correlated with the administrative effectiveness of non-teaching staff in federal universities in South-west Nigeria. The study adopted the survey design. Three federal universities (the University of Ibadan, Federal University of Agriculture, Abeokuta, and Obafemi Awolowo University) were purposively selected because administrative ineffectiveness was more pronounced among non-teaching staff in government-owned universities, and these federal universities were long established. Proportional and stratified random sampling was adopted to select 1156 non-teaching staff across the three universities along the three existing layers of the non-teaching staff: secretarial (senior=311; junior=224), non-secretarial (senior=147; junior=241) and technicians (senior=130; junior=103). The Knowledge Management Practices Questionnaire, with four sub-scales: knowledge creation (α=0.72), knowledge utilization (α=0.76), knowledge sharing (α=0.79) and knowledge transfer (α=0.83), and the Administrative Effectiveness Questionnaire, with four sub-scales: communication (α=0.84), decision implementation (α=0.75), service delivery (α=0.81) and interpersonal relationship (α=0.78), were used for data collection. Data were analyzed using descriptive statistics, Pearson product-moment correlation and multiple regression at the 0.05 level of significance, while qualitative data were content-analyzed. About 59.8% of the non-teaching staff exhibited a low level of knowledge management. The indices of administrative effectiveness of non-teaching staff were rated as follows: service delivery (82.0%), communication (78.0%), decision implementation (71.0%) and interpersonal relationship (68.0%). Knowledge management had significant relationships with the indices of administrative effectiveness: service delivery (r=0.82), communication (r=0.81), decision implementation (r=0.80) and interpersonal relationship (r=0.47). Knowledge management had a significant joint prediction of administrative effectiveness (F(4, 1151) = 0.79, R = 0.86), accounting for 73.0% of its variance. Knowledge sharing (β=0.38), knowledge transfer (β=0.26), knowledge utilization (β=0.22), and knowledge creation (β=0.06) made relatively significant contributions to administrative effectiveness. Lack of team spirit and withdrawal syndrome are the major perceived constraints to knowledge management practices among the non-teaching staff. Knowledge management positively influenced the administrative effectiveness of the non-teaching staff in federal universities in South-west Nigeria. There is a need to ensure that the non-teaching staff imbibe team spirit and embrace teamwork with a view to eliminating their withdrawal syndromes. Besides, knowledge management practices should be deployed into the administrative procedures of the university system.Keywords: knowledge management, administrative effectiveness of non-teaching staff, federal universities in the South-West of Nigeria, knowledge creation, knowledge utilization, effective communication, decision implementation
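A minimal sketch of the reported analysis strategy (Pearson product-moment correlations of each knowledge management sub-scale with administrative effectiveness, followed by a joint multiple regression) is given below; the data file and column names are assumptions for illustration, not the authors' actual data.

```python
# Illustrative sketch of the correlation and multiple regression analysis;
# file and column names are assumed, not the authors' dataset.
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("nonteaching_staff_survey.csv")  # hypothetical file
km_subscales = ["knowledge_creation", "knowledge_utilization",
                "knowledge_sharing", "knowledge_transfer"]
outcome = "administrative_effectiveness"  # composite of the four AE sub-scales

# Pearson product-moment correlation of each sub-scale with effectiveness.
for col in km_subscales:
    r, p = stats.pearsonr(df[col], df[outcome])
    print(f"{col}: r = {r:.2f}, p = {p:.3f}")

# Joint prediction: multiple regression of effectiveness on the four sub-scales,
# giving R^2 (variance accounted for) and the relative contributions (betas).
X = sm.add_constant(df[km_subscales])
model = sm.OLS(df[outcome], X).fit()
print(model.summary())
```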
Procedia PDF Downloads 102234 Seasonal Variability of Picoeukaryotes Community Structure Under Coastal Environmental Disturbances
Authors: Benjamin Glasner, Carlos Henriquez, Fernando Alfaro, Nicole Trefault, Santiago Andrade, Rodrigo De La Iglesia
Abstract:
A central question in ecology refers to the relative importance that local-scale variables have over community composition when compared with regional-scale variables. In coastal environments, strong seasonal abiotic influence dominates these systems, weakening the impact of other parameters like micronutrients. After the industrial revolution, micronutrients like trace metals have increased in the ocean as pollutants, with strong effects upon biotic entities and biological processes in coastal regions. Coastal picoplankton communities have been characterized as a cyanobacteria-dominated fraction, but in recent years the eukaryotic component of this size fraction has gained relevance due to its high influence on the carbon cycle, although diversity patterns and responses to disturbances are poorly understood. South Pacific upwelling coastal environments represent an excellent model to study seasonal changes due to strong between-season differences in the availability of macro- and micronutrients. In addition, some well-constrained coastal bays of this region have been subjected to strong disturbances due to trace metal inputs. In this study, we aim to compare the influence of seasonality and trace metal concentrations on the community structure of planktonic picoeukaryotes. To describe seasonal patterns in the study area, satellite data from a six-year time series and in-situ measurements with a traditional oceanographic approach, such as CTDO equipment, were used. In addition, trace metal concentrations were analyzed through ICP-MS analysis for the same region. For biological data collection, field campaigns were performed in 2011-2012, and the picoplankton community was described by flow cytometry and taxonomical characterization with next-generation sequencing of ribosomal genes. The relation between the abiotic and biotic components was finally determined by multivariate statistical analysis. Our data show strong seasonal fluctuations in abiotic parameters such as photosynthetically active radiation and sea surface temperature, with a clear differentiation of seasons. However, trace metal analysis allowed us to identify strong differentiation within the study area, dividing it into two zones based on trace metal concentrations. Biological data indicate that there are no major changes in diversity but a significant fluctuation in evenness and community structure. These changes are related mainly to regional parameters, like temperature, but by analyzing the influence of metals on picoplankton community structure, we identified a differential response of some plankton taxa to metal pollution. We propose that some picoeukaryotic plankton groups respond differentially to metal inputs by changing their nutritional status and/or requirements under disturbance, as a derived outcome of toxic effects and tolerance.Keywords: Picoeukaryotes, plankton communities, trace metals, seasonal patterns
Procedia PDF Downloads 173233 The Impact of a Leadership Change on Individuals' Behaviour and Incentives: Evidence from the Top Tier Italian Football League
Authors: Kaori Narita, Juan de Dios Tena Horrillo, Claudio Detotto
Abstract:
Decisions on the replacement of leaders are of significance and high prevalence in any organization and concern many of its stakeholders, whether it is a leader of a political party or the CEO of a firm, as indicated by the high media coverage of such events. This merits an investigation into the consequences and implications of a leadership change for the performance and behavior of organizations and their workers. Sport economics provides a fruitful field to explore these issues due to the high frequency of managerial changes in professional sports clubs and the transparency and regularity of observations of team performance and players' abilities. Much of the existing research on managerial change focuses on how it affects the performance of an organization. However, scarcely any attention has been paid to the consequences of such events for the behavior of individuals within the organization. Changes in the behavior and attitudes of a group of workers due to a managerial change could be of great interest in management science, psychology, and operational research. On the other hand, these changes cannot be observed in the final outcome of the organization, as this is affected by many other unobserved shocks, for example, the stress level of workers with the need to deal with a difficult situation. To fill this gap, this study presents the first attempt to evaluate the impact of managerial change on players' behaviors such as attack intensity, aggressiveness, and effort. The data used in this study are from the top tier Italian football league (“Serie A”), where an average of 13 within-season replacements of head coaches was observed over the seasons from 2000/2001 to 2017/18. The preliminary estimation employs Pooled Ordinary Least Squares (POLS) and club-season Fixed Effects (FE) in order to assess the marginal effect of having a new manager on the number of shots, corners and red/yellow cards, after controlling for home-field advantage, the ex ante abilities and current league positions of a team and its opponent. The results from this preliminary estimation suggest that teams do not show a significant difference in their behavior before and after a managerial change. To build on these preliminary results, other methods, including propensity score matching and non-linear model estimates, will be used. Moreover, the study will further investigate these issues by considering other measurements of attack intensity, aggressiveness, and effort, such as possession, the number of fouls and the athletic performance of players, respectively. Finally, the study is going to investigate whether these results vary with the characteristics of the new head coach, for example, their age and experience as a manager and as a player. Thus far, this study suggests that certain behaviours of individuals in an organisation are not immediately affected by a change in leadership. To confirm this preliminary finding and reach a more solid conclusion, further investigation will be conducted in the aforementioned manner, and the results will be elaborated at the conference.Keywords: behaviour, effort, manager characteristics, managerial change, sport economics
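To make the estimation strategy concrete, the following is a hedged sketch of a pooled OLS and a club-season fixed-effects specification of the kind described; the data file, outcome (shots) and control variable names are assumptions, not the authors' code.

```python
# Sketch of POLS and club-season fixed-effects estimates of the "new manager"
# effect on a behavioural outcome. All names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("serie_a_matches_2000_2018.csv")  # hypothetical match-level data
# new_manager: 1 for matches played shortly after a within-season coach change.
controls = "home + own_ability + opp_ability + own_position + opp_position"

# Pooled OLS for one behavioural outcome (number of shots).
pols = smf.ols(f"shots ~ new_manager + {controls}", data=df).fit()

# Club-season fixed effects, absorbed here with dummy variables.
fe = smf.ols(f"shots ~ new_manager + {controls} + C(club_season)", data=df).fit()

for name, res in [("POLS", pols), ("Club-season FE", fe)]:
    print(name, round(res.params["new_manager"], 3),
          round(res.pvalues["new_manager"], 3))
```

The same specification can be re-run with corners or cards as the dependent variable to cover the other behavioural measures mentioned.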
Procedia PDF Downloads 134232 Development and Preliminary Testing of the Dutch Version of the Program for the Education and Enrichment of Relational Skills
Authors: Sakinah Idris, Gabrine Jagersma, Bjorn Jaime Van Pelt, Kirstin Greaves-Lord
Abstract:
Background: The PEERS (Program for the Education and Enrichment of Relational Skills) intervention can be considered a well-established, evidence-based intervention in the USA. However, testing the efficacy of cultural adaptations of PEERS is still ongoing. More and more, the involvement of all stakeholders in the development and evaluation of interventions is acknowledged as crucial for the longer-term implementation of interventions across settings. Therefore, in the current project, teens with ASD (Autism Spectrum Disorder), their neurotypical peers, parents, teachers, as well as clinicians were involved in the development and evaluation of the Dutch version of PEERS. Objectives: The current presentation covers (1) the formative phase and (2) the preliminary adaptation test phase of the cultural adaptation of evidence-based interventions. In the formative phase, we aim to describe the process of adapting the PEERS program to the Dutch culture and care system. In the preliminary adaptation phase, we will present results from the preliminary adaptation test among 32 adolescents with ASD. Methods: In phase 1, a group discussion on common vocabulary was conducted among 70 teenagers (and their teachers) from special and regular education, aged 12-18 years. This inventory concerned 14 key constructs from PEERS, e.g., areas of interest, locations for making friends, common peer groups and crowds inside and outside of school, activities with friends, commonly used ways of electronic communication, ways of handling disagreements, and common teasing comebacks. In addition, 15 clinicians were involved in the translation and cultural adaptation process. The translation and cultural adaptation process was guided by the research team, who incorporated input and feedback from all stakeholders through an iterative feedback incorporation procedure. In phase 2, the parent-reported Social Responsiveness Scale (SRS), the Test of Adolescent Social Skills Knowledge (TASSK), and the Quality of Socialization Questionnaire (QSQ) were assessed pre- and post-intervention to evaluate potential treatment outcomes. Results: The most striking cultural adaptation - reflecting the standpoints of all stakeholders - concerned the strategies for handling rumors and gossip, which were suggested to be taught using a similar approach as the teasing comebacks, more in line with ‘down-to-earth’ Dutch standards. The preliminary testing of this adapted version indicated that the adolescents with ASD significantly improved in their social knowledge (TASSK; t₃₁ = -10.9, p < .01), social experience (QSQ-Parent; t₃₁ = -4.2, p < .01 and QSQ-Adolescent; t₃₂ = -3.8, p < .01), and in parent-reported social responsiveness (SRS; t₃₃ = 3.9, p < .01). In addition, the subjective evaluations of teens with ASD, their parents and clinicians were positive. Conclusions: In order to further scrutinize the effectiveness of the Dutch version of the PEERS intervention, we recommend performing a larger-scale randomized controlled trial (RCT), for which we provide several methodological considerations.Keywords: cultural adaptation, PEERS, preliminary testing, translation
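A minimal sketch of the pre/post outcome comparisons reported above (paired t-tests on TASSK, QSQ and SRS scores) is shown below; the data file and column names are assumed for illustration, not taken from the study.

```python
# Paired pre/post comparisons for each outcome measure; names are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("peers_nl_outcomes.csv")  # hypothetical file, one row per adolescent
for measure in ["tassk", "qsq_parent", "qsq_adolescent", "srs_parent"]:
    pre, post = df[f"{measure}_pre"], df[f"{measure}_post"]
    t, p = stats.ttest_rel(pre, post)  # paired-samples t-test
    print(f"{measure}: t = {t:.1f}, p = {p:.3f}")
```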
Procedia PDF Downloads 168231 Selfie: Redefining Culture of Narcissism
Authors: Junali Deka
Abstract:
“Pictures speak more than a thousand words”. It is the power of the image that can carry multiple meanings depending on how it is read by viewers. This research article is an outcome of an extensive study of the phenomenon of ‘selfie culture’ and the dire need for a self-constructed virtual identity among youths. In recent times, there has been a revolutionary change in the concept of photography in terms of both techniques and applications. The popularity of 'self-portraits' mainly depends on the temporal space and time created on social networking sites like Facebook and Instagram. With reference to Stuart Hall's encoding and decoding process, the article studies the behavior of users who post photographs online. The photographic messages (Roland Barthes) are interpreted differently by different viewers. The notions of 'self' and self-love, the practice of looking (Marita Sturken) and ways of seeing (John Berger) have acquired new definitions and dimensions together. After the Oscars night, show host Ellen DeGeneres's selfie created the most buzz and hype in social media. The term was judged the word of 2013 and has earned its place in the dictionary. In November 2013, the word "selfie" was announced as the "word of the year" by the Oxford English Dictionary. By the end of 2012, Time magazine considered selfie one of the "top 10 buzzwords" of that year; although selfies had existed long before, it was in 2012 that the term "really hit the big time", and the word itself is attributed an Australian origin. The present study was carried out to understand the concept of the 'selfie-bug' and the phenomenon it has created among youth (especially students) at large in developing a pseudo-image of their own. The topic was relevant and gave a platform to discuss the cultural, psychological and sociological implications of the selfie in the age of digital technology. At the first level, content analysis of primary and secondary sources, including newspaper articles and online resources, was carried out, followed by a small online survey conducted with the help of a questionnaire to find out students' views on the selfie and its social and psychological effects. The newspaper reports and online resources confirmed that the selfie is a new trend in digital media and has redefined the notion of beauty and self-love. Facebook and Instagram are the major platforms used to express oneself and create a virtual identity. The findings clearly reflected the more active participation of female students in comparison to male students. The study of the photographs of a few selected respondents revealed differences in attitude and image building between male and female users. The study raises some basic questions about the desire to reconstruct identity among the young generation: are they becoming culturally narcissistic; which factors are responsible for the cultural, social and moral changes in society; and what psychological and technological effects are caused by the smartphone, culminating in the larger question of whether the selfie is a social signifier of identity construction.Keywords: Culture, Narcissist, Photographs, Selfie
Procedia PDF Downloads 407230 Development of an Instrument Assessing Participants’ Motivation on Assigning Monetary Value to Quality of Life
Authors: Afentoula Mavrodi, Andreas Georgiou, Georgios Tsiotras, Vassilis Aletras
Abstract:
Placing a monetary value on a quality-adjusted life-year (QALY) is of utmost importance in economic evaluation. Identifying the population's preferences is critical in order to understand some of the reasons driving variations in the assigned monetary value. Yet, evidence of the motives behind value assignment to a QALY by the general public is limited. Developing an instrument that would capture the population's motives could prove valuable to policy-makers, guiding them in allocating different values to a QALY based on users' motivations. The aim of this study was to identify the most relevant motives and develop an appropriate instrument to assess them. To design the instrument, we employed: a) the EQ-5D-3L tool to assess participants' current health status, and b) the Willingness-to-Pay (WTP) approach, within the Contingent Valuation (CV) Method framework, to elicit the monetary value. Advancing the open-ended approach adopted to assess solely protest bidders' motives, a variety of follow-up item-specific statements was designed (deductive approach), aiming to evaluate the motives of both protest bidders and participants willing to pay for the hypothetical treatment under consideration. The initial design of the survey instrument was the outcome of an extensive literature review. This instrument was revised based on 15 semi-structured interviews that took place in September 2018 and a pilot study held over two months (October-November) in 2018. Individuals with different educational, occupational and economic backgrounds and adequate verbal skills were recruited to complete the semi-structured interviews. The follow-up motivation statements of both protest bidders and those willing to pay were revised and rephrased after the semi-structured interviews. In total, 4 statements for protest bidders and 3 statements for those willing to pay for the treatment were chosen to be included in the survey tool. Using the CATI (Computer-Assisted Telephone Interview) method, a randomly selected sample of 97 persons living in Thessaloniki, Greece, completed the questionnaire on two occasions over a period of 4 weeks. Based on the pilot study results, a test-retest reliability assessment was performed using the intra-class correlation coefficient (ICC). All statements formulated for protest bidders showed acceptable reliability (ICC values of 0.84 (95% CI: 0.67, 0.92) and above). Similarly, all statements for those willing to pay for the treatment showed high reliability (ICC values of 0.86 (95% CI: 0.78, 0.91) and above). Overall, the instrument designed in this study was reliable with regard to the item-specific statements assessing participants' motivation. Validation of the instrument will take place in a future study. For a holistic WTP per QALY instrument, participants' motivation must be addressed broadly. The instrument developed in this study captured a variety of motives and provided insight with regard to the method through which the latter are evaluated. Last but not least, it extended motive assessment to all study participants and not only protest bidders.Keywords: contingent valuation method, instrument, motives, quality-adjusted life-year, willingness-to-pay
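The test-retest check described above can be sketched as follows, assuming long-format data with hypothetical column names and the pingouin library for the intra-class correlation; this is an illustration, not the authors' analysis script.

```python
# Test-retest reliability via a two-way ICC across the two administrations
# four weeks apart. File, column names, and pingouin are assumptions.
import pandas as pd
import pingouin as pg

# Long format: one row per participant x occasion for a given motivation statement.
df = pd.read_csv("pilot_motivation_statements_long.csv")
icc = pg.intraclass_corr(data=df, targets="participant",
                         raters="occasion", ratings="statement_score")
# Report e.g. ICC(2,1) or ICC(3,1) together with its 95% CI.
print(icc[["Type", "ICC", "CI95%"]])
```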
Procedia PDF Downloads 136229 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli
Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma
Abstract:
Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge. Scrutinizing alternative weapons like antimicrobial peptides to strengthen the existing tuberculosis artillery is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) along with isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify its short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte-derived macrophages (MDMs). Log-phase M. tb was grown along with HBD-1 and Pep B for 7 days. M. tb-infected MDMs were treated with HBD-1 and Pep B for 72 hours. Thereafter, colony forming unit (CFU) enumeration was performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. The dormant M. tb models were prepared by following two approaches and treated with different concentrations of HBD-1 and Pep B. Firstly, 20-22 days old M. tb H37Rv was grown in potassium-deficient Sauton media for 35 days. The presence of dormant bacilli was confirmed by Nile red staining. Dormant bacilli were further treated with rifampicin, isoniazid, HBD-1 and its motif for 7 days. The effect of both peptides on latent bacilli was assessed by colony forming unit (CFU) and most probable number (MPN) enumeration. Secondly, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days. Histopathology was done to confirm granuloma formation. The granuloma thus formed was incubated for 72 hours with rifampicin, HBD-1 and Pep B individually. The difference in bacillary load was determined by CFU enumeration. The minimum inhibitory concentrations of HBD-1 and Pep B restricting the growth of mycobacteria in vitro were 2μg/ml and 20μg/ml respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1μg/ml and 5μg/ml respectively. The Nile red positive bacterial population, high MPN/low CFU count and tolerance to isoniazid confirmed the formation of the potassium deficiency-based dormancy model. HBD-1 (8μg/ml) showed 96% and 99% killing, and Pep B (40μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration respectively. Further, H&E-stained aggregates of macrophages and lymphocytes, acid-fast bacilli surrounded by cellular aggregates and rifampicin resistance indicated the formation of the human granuloma dormancy model. HBD-1 (8μg/ml) led to an 81.3% reduction in CFU, whereas its motif Pep B (40μg/ml) showed only a 54.66% decrease in bacterial load inside the granuloma. Thus, the present study indicated that HBD-1 and its motif are effective antimicrobial players against both actively growing and dormant M. tb. They should be further explored to tap their potential to design a powerful weapon for combating tuberculosis.Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis
Procedia PDF Downloads 263228 Analyzing the Effects of a Psychological Intervention on Black Students’ Sense of Belonging in Physics and Math: Exploring Differential Impacts for Historically Black Colleges and Universities and Predominantly White Institutions
Authors: Terrell Strayhorn
Abstract:
The lack of diversity in science, technology, engineering, and mathematics (STEM) fields is a persistent and concerning issue. One contributing factor to the underrepresentation of minority groups in STEM fields is a lack of sense of belonging, which can lead to lower levels of academic engagement, motivation, and achievement. In particular, Black students have been shown to experience lower levels of sense of belonging in STEM compared to their white peers. This study aimed to explore the effects of a psychological intervention on Black students' sense of belonging in physics and math courses at historically Black colleges and universities (HBCUs) and predominantly white institutions (PWIs). The study used a randomized controlled trial design and included 305 Black undergraduate students enrolled in physics or math courses at HBCUs and PWIs in the United States. Participants were randomly assigned to either an intervention group or a control group. The intervention was a brief, video-based psychological exercise designed to enhance sense of belonging, delivered in a single session. The control group received no intervention. The primary outcome measure was sense of belonging in physics and math courses, as assessed by a validated self-report measure. Other outcomes included academic engagement, motivation, and achievement as measured by physics and math (course) grades. Preliminary results show that the intervention has a significant positive effect on Black students' sense of belonging in physics and math courses, with a moderate effect size. The intervention also had a significant positive effect on academic engagement and motivation, but not on academic achievement. Importantly, the effects of the intervention were larger for Black students enrolled at PWIs compared to those enrolled at HBCUs. Findings, at present, suggest that a brief psychological web-based intervention can enhance Black students' sense of belonging in physics and math courses, and that the effects may be particularly strong for Black students enrolled at PWIs, although they are not negligible for Black students at HBCUs. This is an important finding given the persistent underrepresentation of Black students in STEM fields, the growing number of Black students at PWIs, and the potential for enhancing sense of belonging to improve academic outcomes and increase diversity in these fields. The study has several limitations, including a relatively small sample size and a lack of long-term follow-up. Future research could explore the generalizability of these findings to other minority groups and other STEM fields, as well as the potential for longer-term interventions to sustain and enhance the effects observed in this study. Overall, this study highlights the potential for psychological interventions to enhance sense of belonging and improve academic outcomes for Black students in STEM courses, and underscores the importance of addressing sense of belonging as a key factor in promoting diversity and equity in STEM fields.Keywords: sense of belonging, achievement, racial equity, postsecondary education, intervention
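As an illustration of the kind of treatment/control comparison such a trial reports (a between-group test on the belonging outcome plus a standardized effect size), here is a hedged Python sketch with assumed file and column names, not the study's analysis code.

```python
# Intervention vs. control comparison on the primary outcome, with Cohen's d.
# File and column names are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("belonging_rct.csv")  # condition: "intervention" or "control"
treat = df.loc[df.condition == "intervention", "belonging_post"]
ctrl = df.loc[df.condition == "control", "belonging_post"]

t, p = stats.ttest_ind(treat, ctrl)
pooled_sd = np.sqrt(((treat.var(ddof=1) * (len(treat) - 1)) +
                     (ctrl.var(ddof=1) * (len(ctrl) - 1))) /
                    (len(treat) + len(ctrl) - 2))
d = (treat.mean() - ctrl.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")

# The HBCU/PWI moderation could be probed with an interaction term, e.g. a
# statsmodels formula such as: belonging_post ~ condition * institution_type
```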
Procedia PDF Downloads 69227 The Concept of Path in Original Buddhism and the Concept of Psychotherapeutic Improvement
Authors: Beth Jacobs
Abstract:
The landmark movement of Western clinical psychology in the 20th century was the development of psychotherapy. The landmark movement of clinical psychology in the 21st century will be the absorption of meditation practices from Buddhist psychology. While millions of people explore meditation and related philosophy, very few people are exposed to the materials of original Buddhism on this topic, especially to the Theravadan Abhidharma. The Abhidharma is an intricate system of lists and matrixes that were used to understand and remember Buddha’s teaching. The Abhidharma delineates the first psychological system of Buddhism, how the mind works in the universe of reality and why meditation training strengthens and purifies the experience of life. Its lists outline the psychology of mental constructions, perception, emotion and cosmological causation. While the Abhidharma is technical, elaborate and complex, its essential purpose relates to the central purpose of clinical psychology: to relieve human suffering. Like Western depth psychology, the methodology rests on understanding underlying processes of consciousness and perception. What clinical psychologists might describe as therapeutic improvement, the Abhidharma delineates as a specific pathway of purified actions of consciousness. This paper discusses the concept of 'path' as presented in aspects of the Theravadan Abhidharma and relates this to current clinical psychological views of therapy outcomes and gains. The core path in Buddhism is the Eight-Fold Path, which is the fourth noble truth and the launching of activity toward liberation. The path is not composed of eight ordinal steps; it’s eight-fold and is described as opening the way, not funneling choices. The specific path in the Abhidharma is described in many steps of development of consciousness activities. The path is not something a human moves on, but something that moments of consciousness develop within. 'Cittas' are extensively described in the Abhidharma as the atomic-level unit of a raw action of consciousness touching upon an object in a field, and there are 121 types of cittas categorized. The cittas are embedded in the mental factors, which could be described as the psychological packaging elements of our experiences of consciousness. Based on these constellations of infinitesimal, linked occurrences of consciousness, citta are categorized by dimensions of purification. A path is a chain of citta developing through causes and conditions. There are no selves, no pronouns in the Abhidharma. Instead of me walking a path, this is about a person working with conditions to cultivate a stream of consciousness that is pure, immediate, direct and generous. The same effort, in very different terms, informs the work of most psychotherapies. Depth psychology seeks to release the bound, unconscious elements of mental process into the clarity of realization. Cognitive and behavioral psychologies work on breaking down automatic thought valuations and actions, changing schemas and interpersonal dynamics. Understanding how the original Buddhist concept of positive human development relates to the clinical psychological concept of therapy weaves together two brilliant systems of thought on the development of human well being.Keywords: Abhidharma, Buddhist path, clinical psychology, psychotherapeutic outcome
Procedia PDF Downloads 213226 Forests, the Sanctuaries to Specialist and Rare Wild Native Bees at the Foothills of Western Himalayas
Authors: Preeti Virkar, V. P. Uniyal, Vinod Kumar Bhatt
Abstract:
With a 50% decline in managed honey bee hives on the continents of Europe and America, farmers and landscape managers are turning to native wild bees for their essential ecosystem service of pollination. Wild bee populations are also under threat due to rapid land use changes from anthropogenic activities. With an escalating human population expected to reach 9.0 billion by 2050, human-induced land use changes are predicted to further degrade the habitats of numerous species by the turn of this century. The status of bees is uncertain, especially in the tropical regions of the world, which also raises questions about the global pollinator decline crisis and the essential services bees provide to wild and managed flora. Our investigation compares wild native bee diversity and status in forests and agroecosystems in the Doon Valley landscape, situated at the foothills of the Himalayan ranges, Uttarakhand, India. We ask whether (1) natural habitats are a refuge for richer and rarer bee communities than agroecosystems, (2) agroecosystems closer to natural habitats are more similar to them than agroecosystems farther away and hence support richer bee communities, and (3) polyculture farms support richer bee communities than monocultures. The data were collected using observation and pan-trap sampling from February to May, 2012 to 2014. We recorded 43 species of bees in the Doon Valley. They belonged to 5 families: Megachilidae, Apidae, Andrenidae, Halictidae and Colletidae. A multinomial model approach was used to classify the bees into 2 habitats, in which forests were shown to support a greater number of specialist species (26%, n=11) than agroecosystems (7%, n=3). The valley had many species categorized as rare (58%, n=25) and very few generalists (9%, n=4). A linear regression model run on our data demonstrated higher bee diversity in agroecosystems in close proximity to forests (H' for < 200 m = 1.60) compared to those farther away (H' for > 600 m = 0.56) (R2=0.782, SE=0.148, p value=0.004). Organic agriculture supported significantly greater species richness in comparison to conventional farms (Mann-Whitney U test, n1 = 33, n2 = 35; P = 0.001). Forest ecosystems are a refuge for rare specialist groups and support bee communities in nearby agroecosystems. The findings of our investigation demonstrate the importance of natural habitats as a potential refuge for rare native wild bee pollinators. Polyculture in the valley behaves similarly to natural habitats and supports diverse bee communities in comparison to conventional monocultures. Our study suggests that farming communities adopt diverse organic agriculture systems to attract wild pollinators beneficial for better crop production. Forests are sanctuaries for bees to nest, forage, and breed. Therefore, our findings also suggest that landscape managers not only preserve protected areas but also enhance floral diversity in semi-natural and urban areas.Keywords: native bees, pollinators, polyculture, agroecosystem, natural habitat, diversity, monoculture, specialists, generalists
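The two statistical results quoted above (the distance-to-forest regression on Shannon diversity and the Mann-Whitney comparison of organic versus conventional farms) could be reproduced with code along the following lines; the file and column names are assumptions for illustration.

```python
# Linear regression of Shannon diversity (H') on distance to forest, plus a
# Mann-Whitney U test of species richness by farm management. Names assumed.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

sites = pd.read_csv("doon_valley_bee_sites.csv")  # hypothetical site-level data

# Diversity declines with distance from forest (cf. R^2 = 0.782 in the abstract).
reg = smf.ols("shannon_H ~ distance_to_forest_m", data=sites).fit()
print(round(reg.rsquared, 3), round(reg.pvalues["distance_to_forest_m"], 3))

# Organic vs. conventional farms: non-parametric comparison of species richness.
organic = sites.loc[sites.management == "organic", "species_richness"]
conventional = sites.loc[sites.management == "conventional", "species_richness"]
u, p = stats.mannwhitneyu(organic, conventional, alternative="two-sided")
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3f}")
```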
Procedia PDF Downloads 217225 DeepNIC a Method to Transform Each Tabular Variable into an Independant Image Analyzable by Basic CNNs
Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.
Abstract:
Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data. But for tabular data, it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL be the absolute tool for data classification? All current solutions consist in repositioning the variables in a 2x2 matrix using their correlation proximity. In doing so, these methods obtain an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision trees, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which disobeys Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 super parameters used in the Neurops. By varying these 2 super parameters, we obtain a 2x2 matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR. The total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the GSE22513 public data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification
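The abstract does not give implementation details, so the following is only a conceptual sketch of the general idea it describes: mapping a grid of per-variable, NIC-like probabilities in [0, 1] to a grey-level image and classifying it with a basic CNN. All shapes, layer sizes and names are assumptions, not the authors' DeepNIC code.

```python
# Conceptual sketch: probability grid -> grey-level image -> small CNN classifier.
# Shapes, layer sizes, and names are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def nic_grid_to_image(nic_probs: np.ndarray) -> torch.Tensor:
    """Map a 2D grid of probabilities (one per super-parameter setting) to a
    single-channel grey-level image tensor, intensity proportional to probability."""
    grey = np.clip(nic_probs, 0.0, 1.0).astype(np.float32)
    return torch.from_numpy(grey).unsqueeze(0)  # shape (1, H, W)

class TinyCNN(nn.Module):
    """A basic CNN classifier of the kind the abstract refers to."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One image per variable: e.g. a 64x64 grid of NIC-like probabilities.
image = nic_grid_to_image(np.random.rand(64, 64))
logits = TinyCNN()(image.unsqueeze(0))  # add batch dimension -> (1, 1, 64, 64)
print(logits.shape)                     # torch.Size([1, 2])
```

In the method described, each variable would get its own (much larger) image built from the NIC values and their logical combinations; the sketch only shows the final image-to-CNN step in miniature.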
Procedia PDF Downloads 125