Search results for: identification procedure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4952

272 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System

Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim

Abstract:

The general transport equation has a wide range of applications in fluid mechanics and heat transfer problems. When the variable φ, which represents a flow property, is taken as a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, seeking numerical solutions rather than analytic ones is the more frequently used procedure. The finite difference method is a commonly used numerical solution method. In these equations, using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and by adding the continuity equation to the system, the number of equations becomes equal to the number of unknowns. In this situation, velocity and pressure components emerge as two important parameters. In the solution of the differential equation system, velocities and pressures must be solved together. However, in the considered grid system, when pressure and velocity values are solved jointly at the same nodal points, some problems arise. To overcome this problem, a staggered grid system is a commonly preferred remedy. Various algorithms have been developed for the computerized solution of the staggered grid system; the two most commonly used are the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were solved numerically for Newtonian, incompressible, laminar flow with body (gravitational) forces neglected, in a hydrodynamically fully developed region, in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure and Reynolds numbers were used. The differential equations were discretized using the central difference and hybrid schemes. The discretized equation system was solved by the Gauss-Seidel iteration method. SIMPLE and SIMPLER were used as solution algorithms. The obtained results were compared for the central difference and hybrid discretization methods, and the SIMPLE and SIMPLER solution algorithms were compared to each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area. Furthermore, as a computer solution algorithm, despite some disadvantages, the SIMPLER algorithm is more practical and gives results in a shorter time. For this study, a code was developed in the DELPHI programming language. The values obtained by the computer program were converted into graphs and discussed. During plotting, the quality of the graphs was increased by adding intermediate values to the obtained results using the Lagrange interpolation formula. For the solution of the system, the numbers of grid cells and nodes were estimated. At the same time, to show that the obtained results are satisfactory, a grid-independence (GCI) analysis was performed for coarse, medium and fine grids over the solution domain. When the graphs and program outputs were compared with similar studies, highly satisfactory results were observed.
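
As a minimal, self-contained sketch of the kind of iterative solve used in SIMPLE-family algorithms (not the authors' DELPHI implementation), the snippet below applies Gauss-Seidel sweeps to a central-difference discretization of a Poisson-type pressure-correction equation; the grid size, spacing and boundary conditions are placeholder assumptions.

```python
import numpy as np

def gauss_seidel_poisson(b, dx, dy, n_iter=500, tol=1e-6):
    """Solve a 2D Poisson-type pressure-correction equation
    d2p/dx2 + d2p/dy2 = b with central differences and
    Gauss-Seidel sweeps (homogeneous Dirichlet boundaries)."""
    ny, nx = b.shape
    p = np.zeros((ny, nx))
    cx, cy = 1.0 / dx**2, 1.0 / dy**2
    diag = 2.0 * (cx + cy)
    for _ in range(n_iter):
        max_change = 0.0
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                new = (cx * (p[j, i - 1] + p[j, i + 1])
                       + cy * (p[j - 1, i] + p[j + 1, i])
                       - b[j, i]) / diag
                max_change = max(max_change, abs(new - p[j, i]))
                p[j, i] = new          # in-place update = Gauss-Seidel
        if max_change < tol:
            break
    return p

# Example: a single pressure-correction solve on a small assumed grid
rhs = np.zeros((21, 21))
rhs[10, 10] = 1.0                      # mass-imbalance source term
p_prime = gauss_seidel_poisson(rhs, dx=0.05, dy=0.05)
```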

Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms

Procedia PDF Downloads 365
271 Modeling and Energy Analysis of Limestone Decomposition with Microwave Heating

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

The energy transition is spurred by structural changes in energy demand, supply, and prices. Microwave technology was first proposed as a faster alternative for cooking food; it was found that food heated instantly when interacting with high-frequency electromagnetic waves. The dielectric properties account for a material’s ability to absorb electromagnetic energy and dissipate this energy in the form of heat. Many energy-intensive industries could benefit from electromagnetic heating since many of their raw materials are dielectric at high temperatures. Limestone sedimentary rock is a dielectric material intensively used in the cement industry to produce unslaked lime. A numerical 3D model was implemented in COMSOL Multiphysics to study continuous limestone processing under microwave heating. The model solves the two-way coupling between the energy equation and Maxwell’s equations, as well as the coupling between the heat transfer and chemical interfaces. Complementarily, a controller was implemented to optimize the overall heating efficiency and control the numerical model stability. This was done by continuously matching the cavity impedance and predicting the required energy for the system, avoiding energy inefficiencies. This controller was developed in MATLAB and successfully fulfilled all these goals. The influence of the limestone load on thermal decomposition and overall process efficiency was the main object of this study. The procedure considered the verification and validation of the chemical kinetics model separately from the coupled model. The chemical model was found to correctly describe the chosen kinetic equation, and the coupled model successfully solved the equations describing the numerical model. The interaction between the flow of material and the Poynting vector of the electric field was found to influence limestone decomposition, as a result of the low dielectric properties of limestone. The numerical model considered this effect and took advantage of this interaction. The model was shown to be highly unstable when solving non-linear temperature distributions. Limestone has a dielectric loss response that increases with temperature and has low thermal conductivity. For this reason, limestone is prone to thermal runaway under electromagnetic heating, as well as to numerical model instabilities. Five different scenarios were tested by considering material fill ratios of 30%, 50%, 65%, 80%, and 100%. Simulating the tube rotation for mixing enhancement was proven to be beneficial and crucial for all loads considered. When a uniform temperature distribution is accomplished, the interaction between the electromagnetic field and the material is facilitated. The results pointed out the inefficient development of the electric field within the bed for the 30% fill ratio. The thermal efficiency showed a propensity to stabilize around 90% for loads higher than 50%. The process accomplished a maximum microwave efficiency of 75% for the 80% fill ratio, indicating that the tube has an optimal fill of material. Electric field peak detachment was observed for the case with the 100% fill ratio, justifying its lower efficiency compared to 80%. Microwave technology has been demonstrated to be an important ally for the decarbonization of the cement industry.
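
As a rough illustration of the kind of chemical kinetics sub-model that is verified separately from the coupled electromagnetic-thermal model, the sketch below integrates a first-order Arrhenius decomposition of CaCO₃ for an assumed temperature history; the pre-exponential factor, activation energy and heating ramp are placeholder values, not the parameters used or fitted in the study.

```python
import numpy as np

# Assumed first-order Arrhenius kinetics for CaCO3 -> CaO + CO2;
# A and Ea are illustrative placeholders, not the study's values.
A = 2.0e7        # pre-exponential factor, 1/s
Ea = 190e3       # activation energy, J/mol
R = 8.314        # gas constant, J/(mol K)

def decompose(T_of_t, dt=1.0):
    """Integrate the conversion alpha(t) of limestone for a given
    temperature history T_of_t (array of K, one value per dt seconds)."""
    alpha = np.zeros(len(T_of_t))
    for n in range(1, len(T_of_t)):
        k = A * np.exp(-Ea / (R * T_of_t[n - 1]))       # rate constant
        dalpha = k * (1.0 - alpha[n - 1]) * dt          # first-order model
        alpha[n] = min(1.0, alpha[n - 1] + dalpha)
    return alpha

# Example: assumed linear heating ramp from 300 K to 1200 K over 1 h
T_hist = np.linspace(300.0, 1200.0, 3600)
conversion = decompose(T_hist)
print(f"final conversion: {conversion[-1]:.2f}")
```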

Keywords: CFD numerical simulations, efficiency optimization, electromagnetic heating, impedance matching, limestone continuous processing

Procedia PDF Downloads 147
270 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features needed for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract automatic features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In fact, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We develop this work further by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which drives the model to over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduced the risk of over-fitting by using a dynamic rather than static input tensor shape in the SoftMax layer and by specifying a desired soft margin. In fact, the margin acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different class labels in the normalized log domain: we penalize those predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false-prediction tensors, which means we assign more weight to classes that lie close to one another (namely, “hard labels to learn”). By doing this, we constrain the model to generate more discriminant feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer for a non-convex problem. Our optimizer works with an alternative gradient-updating procedure based on an exponentially weighted moving average for faster convergence, and exploits a weight decay method to drastically reduce the learning rate near optima and reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013, a 16% improvement compared to the first rank after 10 years; 90.73% on RAF-DB; and 100% k-fold average accuracy on the CK+ dataset. The proposed network thus provides top performance compared to other networks, which require much larger training datasets.
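
The paper's Dynamic Soft-Margin SoftMax is not fully specified in the abstract; as a simplified, static-margin relative of that idea, the sketch below implements an additive-margin softmax cross-entropy in NumPy, where the ground-truth logit is reduced by a fixed margin before normalization. The margin and scale values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def soft_margin_softmax_loss(logits, labels, margin=0.35, scale=30.0):
    """Additive-margin softmax cross-entropy (a simplified, static-margin
    relative of the Dynamic Soft-Margin SoftMax): the logit of the
    ground-truth class is reduced by `margin` before scaling, forcing the
    network to separate classes by at least that margin."""
    z = logits.copy()
    rows = np.arange(len(labels))
    z[rows, labels] -= margin                  # penalize the target logit
    z *= scale                                 # temperature-like scaling
    z -= z.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()

# Toy usage: 4 samples, 7 emotion classes
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 7))
labels = np.array([0, 3, 6, 2])
print(soft_margin_softmax_loss(logits, labels))
```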

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 54
269 Impact of Pharmacist-Led Care on Glycaemic Control in Patients with Type 2 Diabetes: A Randomised-Controlled Trial

Authors: Emmanuel A. David, Rebecca O. Soremekun, Roseline I. Aderemi-Williams

Abstract:

Background: The complexities involved in the management of diabetes mellitus require multi-dimensional, multi-professional, collaborative and continuous care by health care providers, and substantial self-care by patients, in order to achieve desired treatment outcomes. The effect of pharmacists’ care in the management of diabetes in resource-endowed nations is well documented in the literature, but randomised-controlled assessment of the impact of pharmacist-led care among patients with diabetes in resource-limited settings like Nigeria and sub-Saharan African countries is scarce. Objective: To evaluate the impact of pharmacist-led care on glycaemic control in patients with uncontrolled type 2 diabetes, using a randomised-controlled study design. Methods: This study employed a prospective randomised-controlled design to assess the impact of pharmacist-led care on the glycaemic control of 108 poorly controlled type 2 diabetic patients. A total of 200 clinically diagnosed type 2 diabetes patients were purposively selected using fasting blood glucose ≥ 7 mmol/L and tested for long-term glucose control using glycated haemoglobin. One hundred and eight (108) patients with glycated haemoglobin ≥ 7% were recruited for the study and assigned unique identification numbers. They were further randomly allocated to intervention and usual care groups using computer-generated random numbers, with each group containing 54 subjects. Patients in the intervention group received a pharmacist-structured intervention, including education, periodic phone calls, adherence counselling, referral and 6 months of follow-up, while patients in the usual care group only kept clinic appointments with their physicians. Data collected at baseline and six months included socio-demographic characteristics, fasting blood glucose, glycated haemoglobin, blood pressure, and lipid profile. With an intention-to-treat analysis, the Mann-Whitney U test was used to compare the median change from baseline in the primary outcome (glycated haemoglobin) and secondary outcome measures, effect size was computed, and the proportion of patients that reached target laboratory parameters was compared in both arms. Results: All 108 enrolled participants completed the study, 54 in each arm. Mean age was 51±11.75 years, and the majority were female (68.5%). Intervention patients had a significant reduction in glycated haemoglobin (-0.75%; P<0.001; η2 = 0.144), with a greater proportion attaining the target laboratory parameter after 6 months of care compared to the usual care group (glycated haemoglobin: 42.6% vs 20.8%; P=0.02). Furthermore, patients who received pharmacist-led care were about 3 times more likely to have better glucose control (AOR 2.718, 95% CI: 1.143-6.461) compared to the usual care group. Conclusion: Pharmacist-led care significantly improved glucose control in patients with uncontrolled type 2 diabetes mellitus and should be integrated into the routine management of diabetes patients, especially in resource-limited settings.
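
As a minimal sketch of the intention-to-treat comparison described above (using made-up data, not the trial's measurements), the snippet below applies the Mann-Whitney U test to the 6-month change in glycated haemoglobin between the two arms and a chi-square test to the proportions attaining the target; the simulated values and target counts are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical HbA1c changes from baseline (%) after 6 months, 54 per arm;
# the real trial data are not reproduced here.
intervention = rng.normal(loc=-0.75, scale=0.8, size=54)
usual_care = rng.normal(loc=-0.10, scale=0.8, size=54)

u_stat, p_value = stats.mannwhitneyu(intervention, usual_care,
                                     alternative="two-sided")

# Proportion attaining the HbA1c target, compared with a chi-square test
# (counts below are illustrative placeholders)
target_int, target_uc = 23, 11
table = np.array([[target_int, 54 - target_int],
                  [target_uc, 54 - target_uc]])
chi2, p_prop, _, _ = stats.chi2_contingency(table)
print(f"Mann-Whitney U={u_stat:.1f}, p={p_value:.3f}; "
      f"target attainment p={p_prop:.3f}")
```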

Keywords: glycaemic control, pharmacist-led care, randomised-controlled trial, type 2 diabetes mellitus

Procedia PDF Downloads 96
268 Applying Program Theory-Driven Approach to Design and Evaluate a Teacher Professional Development Program

Authors: S. C. Lin, M. S. Wu

Abstract:

Japanese scholar Manabu Sato has been advocating the Learning Community, which has changed fundamental education in Japan during the last three decades; it has also been called a “Quiet Revolution.” Manabu Sato criticized traditional education for focusing only on individual competition, exams, teacher-centered instruction, and memorization, leaving students lacking learning motivation. Therefore, Manabu Sato proclaimed that learning should be a sustainable process of “constantly weaving the relationship and the meanings” by having dialogues with learning materials, with peers, and with oneself. For a long time, secondary school education in Taiwan has been focused on exams and has emphasized reciting and memorizing, and the phenomenon of “giving up learning” has appeared among some students. Manabu Sato’s learning community program has been implemented very successfully in Japan. It is worth exploring whether the learning community can resolve the issue of the “escape from learning” phenomenon among secondary school students in Taiwan. This study covered the first year of a two-year project. The project applied a program theory-driven approach to evaluating the impact of teachers’ professional development interventions on students’ learning by using a mix of methods, qualitative inquiry, and a quasi-experimental design. The current study presents the results of using the theory-driven approach to program planning to design and evaluate a teachers’ professional development program (TPDP). Manabu Sato’s learning community theory was applied to structure all components of a 54-hour workshop. The participants consisted of seven secondary school science teachers from two schools. The research procedure comprised: 1) defining the problem and assessing participants’ needs; 2) selecting the theoretical framework; 3) determining theory-based goals and objectives; 4) designing the TPDP intervention; 5) implementing the TPDP intervention; 6) evaluating the TPDP intervention. Data were collected from a number of different sources, including a TPDP checklist, activity responses from the workshop, an LC subject matter test, teachers’ e-portfolios, course design documents, and a teachers’ belief survey. The major findings indicated that the program design was suitable for the participants. More than 70% of the participants were satisfied with the program implementation. They revealed that the TPDP was beneficial to their instruction and promoted their professional capacities. However, due to heavy teaching loads during the project, some participants were unable to attend all workshops. To resolve this problem, the author offered them the options of watching DVDs or reading articles provided by the research team. This study also established a communication platform for participants to share their thoughts and learning experiences. The TPDP had marked impacts on participants’ teaching beliefs. They came to believe that learning should be a sustainable process of “constantly weaving the relationship and the meanings” by having dialogues with learning materials, with peers, and with oneself. Having learned from the TPDP, they applied a “learner-centered” approach and instructional strategies such as learning by doing, collaborative learning, and reflective learning to design their courses. To conclude, participants’ beliefs, knowledge, and skills were promoted by the program.

Keywords: program theory-driven approach, learning community, teacher professional development program, program evaluation

Procedia PDF Downloads 292
267 Detection, Analysis and Determination of the Origin of Copy Number Variants (CNVs) in Intellectual Disability/Developmental Delay (ID/DD) Patients and Autistic Spectrum Disorders (ASD) Patients by Molecular and Cytogenetic Methods

Authors: Pavlina Capkova, Josef Srovnal, Vera Becvarova, Marie Trkova, Zuzana Capkova, Andrea Stefekova, Vaclava Curtisova, Alena Santava, Sarka Vejvalkova, Katerina Adamova, Radek Vodicka

Abstract:

ASDs are heterogeneous and complex developmental diseases with a significant genetic background. Recurrent CNVs are known to be a frequent cause of ASD. These CNVs can, however, have variable expressivity, which results in a spectrum of phenotypes from asymptomatic to ID/DD/ASD. ASD is associated with ID in ~75% of individuals. Various platforms are used to detect pathogenic mutations in the genome of these patients. This study focused on determining the frequency of pathogenic mutations in a group of ASD patients and a group of ID/DD patients using various strategies, along with a comparison of their detection rates. The possible role of the origin of these mutations in the aetiology of ASD was assessed. The study included 35 individuals with ASD and 68 individuals with ID/DD (64 males and 39 females in total), who underwent rigorous genetic, neurological and psychological examinations. Screening for pathogenic mutations involved karyotyping, screening for FMR1 mutations and for metabolic disorders, a targeted MLPA test with probe mixes Telomeres 3 and 5, Microdeletion 1 and 2, Autism 1, MRX, and chromosomal microarray analysis (CMA) (Illumina or Affymetrix). Chromosomal aberrations were revealed by karyotyping in 7 individuals (1 in the ASD group). FMR1 mutations were discovered in 3 individuals (1 in the ASD group). The detection rate of pathogenic mutations in ASD patients with a normal karyotype was 15.15% by MLPA and CMA. The frequencies of pathogenic mutations in ID/DD patients with a normal karyotype were 25.0% by MLPA and 35.0% by CMA. CNVs inherited from asymptomatic parents were more abundant than de novo changes in ASD patients (11.43% vs. 5.71%), in contrast to the ID/DD group, where de novo mutations prevailed over inherited ones (26.47% vs. 16.18%). ASD patients shared their mutations with their fathers more frequently than patients from the ID/DD group (8.57% vs. 1.47%). Maternally inherited mutations predominated in the ID/DD group in comparison with the ASD group (14.7% vs. 2.86%). CNVs of unknown significance were found in 10 patients by CMA and in 3 patients by MLPA. Although the detection rate is highest when using CMA, recurrent CNVs can be easily detected by MLPA. CMA proved to be more efficient in the ID/DD group, where a larger spectrum of rare pathogenic CNVs was revealed. This study determined that maternally inherited highly penetrant mutations and de novo mutations more often resulted in ID/DD without ASD. The paternally inherited mutations could, however, be a source of the greater variability in the genome of the ASD patients and contribute to the polygenic character of the inheritance of ASD. As the number of subjects in the group is limited, a larger cohort is needed to confirm this conclusion. Inherited CNVs have a role in the aetiology of ASD, possibly in combination with additional genetic factors, i.e., mutations elsewhere in the genome. The identification of these interactions constitutes a challenge for the future. Supported by MH CZ – DRO (FNOl, 00098892), IGA UP LF_2016_010, TACR TE02000058 and NPU LO1304.

Keywords: autistic spectrum disorders, copy number variant, chromosomal microarray, intellectual disability, karyotyping, MLPA, multiplex ligation-dependent probe amplification

Procedia PDF Downloads 331
266 Managing Type 1 Diabetes in College: A Thematic Analysis of Online Narratives Posted on YouTube

Authors: Ekaterina Malova

Abstract:

Type 1 diabetes (T1D) is a chronic illness requiring immense lifestyle changes to reduce the chance of life-threatening complications. Moving to college may be the first time a young adult with T1D takes responsibility for all aspects of their diabetes care. In addition, people with T1D constantly face stigmatization and discrimination as a result of their health condition, which puts additional pressure on young adults with T1D. Hence, omissions in diabetes self-care often occur during the transition to college, when both the social and physical environments of young adults change drastically, and contribute to the fact that emerging adults remain one of the age groups with the highest glycated hemoglobin levels and poorest diabetes control. However, despite the potentially severe health risks caused by a lack of proper diabetes self-care, little is known about the experiences of emerging adults with T1D embarking on a higher education journey. Thus, young adults with type 1 diabetes are a 'forgotten group,' meaning that their experiences are rarely addressed by researchers. Given that self-disclosure and information-seeking can be challenging for individuals with stigmatized illnesses, online platforms like YouTube have become a popular medium of self-disclosure and information-seeking for people living with T1D. This study therefore aims to provide an analysis of the experiences that college students with T1D choose to share with the general public online and to explore the nature of the information communicated by college students with T1D to the online community in personal narratives posted on YouTube. A systematic approach was used to retrieve a video sample by searching YouTube with the keywords 'type 1 diabetes' and 'college,' with results ordered by relevance. A total of 18 videos were saved. Video lengths ranged from 2 to 28 minutes. The data were coded using NVivo. Video transcripts were coded and analyzed utilizing the thematic analysis method. Three key themes emerged from the thematic analysis: 1) advice, 2) personal experience, and 3) things I wish everyone knew about T1D. In addition, Theme 1 was divided into subtopics to differentiate between the most common types of advice: a) overcoming stigma and b) seeking social support. The identified themes indicate that two groups can potentially benefit from watching students’ video testimonies: 1) the lay public and 2) other students with T1D. Given that students in the videos reported a lack of T1D education among the lay public, such video narratives can serve important educational purposes and reduce health stigma, while perceived similarity and identification with the students in the videos may facilitate the transfer of health information to other individuals with T1D and positively affect their diabetes routine. Thus, online video narratives can potentially serve both educational and persuasive purposes, empowering students with T1D to stay in control of their condition while succeeding academically.

Keywords: type 1 diabetes, college students, health communication, transition period

Procedia PDF Downloads 133
265 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-Stokes Raman Spectroscopy

Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai

Abstract:

Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimize enzymes and exploit their full potential when structure-function knowledge is insufficient. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedure such as transfection is needed. Its open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to excellent throughput of enzyme generation. However, the capabilities of current screening methods have limitations. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the label’s binding affinity and photostability. To acquire the natural activity of an enzyme, another method is to combine a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment: hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. The turnaround time is 30 minutes per sample by HPLC, which limits the enzyme improvement acquirable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is highly demanded. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is a form of coherent Raman spectroscopy that can specifically identify label-free chemical components from their inherent molecular vibrations. These characteristic vibrational signals are generated from different vibrational modes of chemical bonds. With broadband CARS, the chemicals in one sample can be identified from their signals in one broadband CARS spectrum. Moreover, it can magnify the signal levels by several orders of magnitude over spontaneous Raman systems, and therefore has the potential to evaluate a chemical's concentration rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and the oxidized form of nicotinamide adenine dinucleotide (NAD+) to acetaldehyde and the reduced form (NADH), was used. The signal of NADH at 1660 cm⁻¹, which is generated from the nicotinamide in NADH, was utilized to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing signal intensity of NADH. This CARS measurement result was consistent with the result of a conventional method, UV-Vis. CARS is expected to find application in high-throughput enzyme screening and to realize more reliable enzyme improvement within a reasonable time.
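
As a purely illustrative sketch of how the 1660 cm⁻¹ NADH signal can be turned into a concentration time course, the snippet below fits a linear calibration and estimates an initial reaction rate from the early part of the trace; the calibration points and signal values are placeholders, not the study's data.

```python
import numpy as np

# Hypothetical calibration: CARS intensity at 1660 cm^-1 vs known [NADH];
# the actual calibration data from the study are not reproduced here.
cal_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0])          # mM
cal_intensity = np.array([0.02, 0.26, 0.51, 1.01, 2.03])  # a.u.
slope, intercept = np.polyfit(cal_conc, cal_intensity, 1)

def intensity_to_nadh(i_1660):
    """Convert a 1660 cm^-1 CARS intensity to NADH concentration (mM)."""
    return (i_1660 - intercept) / slope

# Time course of an alcohol dehydrogenase reaction sampled every 0.33 s
t = np.arange(0, 30, 0.33)
intensity = 0.02 + 0.03 * t                 # placeholder rising signal
conc = intensity_to_nadh(intensity)
initial_rate = np.polyfit(t[:30], conc[:30], 1)[0]     # mM per second
print(f"initial NADH production rate ~ {initial_rate:.3f} mM/s")
```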

Keywords: Coherent Anti-Stokes Raman Spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy

Procedia PDF Downloads 120
264 Academia as Creator of Emerging, Innovative Communities of Practice and Learning

Authors: Francisco Julio Batle Lorente

Abstract:

The present paper aims to present a new category of role for academia: proactive creator/promoter of communities of practice in emerging areas of innovation. It is based on research among practitioners in three different areas: social entrepreneurship, alumni engaged in entrepreneurship and innovation, and digital nomads. The concept of a CoP refers to an intentionally created space to share experiences and collectively reflect on the cases arising from practice. Such an endeavour is not contemplated explicitly in the literature on academic roles. The goal of the paper is to provide a framework for this function and to shed some light on the perceptions and priorities of members of emerging communities (78 alumni, 154 social entrepreneurs, and 231 digital nomads) regarding community, learning, engagement, and networking, areas in which the university can help and, by doing so, contribute to signalling the emerging area and creating new opportunities for academia. The research methodology was based on survey research, a specific type of field study that involves the collection of data from a sample of elements drawn from a well-defined population through the use of a questionnaire. It was considered that survey research might be valuable to the present project and help outline the utility of various study designs and future projects with the emerging communities that are the object of the investigation. Open questions were used for different topics, as well as the critical incident technique. A standard technique was used for survey sampling and questionnaire design, and a procedure was defined for pretesting questionnaires and for data collection. The questionnaire was distributed by means of Google Forms. The results indicate that the members of emerging, innovative communities of practice and learning such as the ones selected for this investigation lack cohesion, inspiration, networking, opportunities for the creation of social capital, and opportunities for collaboration beyond their existing close network. The opportunities that arise for academia from proactively helping to articulate CoPs (and communities of learning) relate to key elements of any CoP/CoL: community construction approaches, technological infrastructure, benefits, participation issues and urgent challenges, trust, networking, technical ability/training/development, and collaboration. Beyond training, three other areas (networking, collaboration, and urgent challenges) were the ones in which the contribution of universities to the communities was considered most interesting and workable by practitioners. The analysis of the responses to the open questions related to the perception of universities points to terra incognita for universities to explore (signalling new areas, establishing broader collaborations with research, government, media and corporations, attracting investment). Based on the findings from this research, there is some evidence that CoPs can offer a formal and informal method of professional and interprofessional development for members of any emerging and innovative community and can decrease social and professional isolation. The opportunity they offer to academia can strengthen the entrepreneurial and engaged university identity. It also moves academia into a realm of confronting present and future civic challenges in a more proactive way.

Keywords: social innovation, new roles of academia, community of learning, community of practice

Procedia PDF Downloads 62
263 Identification and Understanding of Colloidal Destabilization Mechanisms in Geothermal Processes

Authors: Ines Raies, Eric Kohler, Marc Fleury, Béatrice Ledésert

Abstract:

In this work, the impact of clay minerals on the formation damage of sandstone reservoirs is studied to provide a better understanding of the problem of deep geothermal reservoir permeability reduction due to fine particle dispersion and migration. In some situations, despite the presence of filters in the geothermal loop at the surface, particles smaller than the filter size (<1 µm) may surprisingly generate significant permeability reduction, affecting the overall performance of the geothermal system in the long term. Our study is carried out on cores from a Triassic reservoir in the Paris Basin (Feigneux, 60 km northeast of Paris). Our first goal was to identify the clays responsible for clogging; a mineralogical characterization of these natural samples was carried out by coupling X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS). The results show that the studied stratigraphic interval contains mostly illite and chlorite particles. Moreover, the spatial arrangement of the clays in the rocks, as well as the morphology and size of the particles, suggests that illite is more easily mobilized than chlorite by the flow in the pore network. Based on these results, illite particles were prepared and used in core flooding in order to better understand the factors leading to the aggregation and deposition of this type of clay particle in geothermal reservoirs under various physicochemical and hydrodynamic conditions. First, the stability of illite suspensions under geothermal conditions was investigated using different characterization techniques, including Dynamic Light Scattering (DLS) and Scanning Transmission Electron Microscopy (STEM). Various parameters, such as the hydrodynamic radius (around 100 nm) and the morphology and surface area of the aggregates, were measured. Then, core-flooding experiments were carried out using sand columns to mimic the permeability decline due to the injection of illite-containing fluids in sandstone reservoirs. In particular, the effects of ionic strength, temperature, particle concentration and flow rate of the injected fluid were investigated. When the ionic strength increases, a permeability decline of more than a factor of 2 could be observed for pore velocities representative of in-situ conditions. Further details of the retention of particles in the columns were obtained from Magnetic Resonance Imaging and X-ray tomography techniques, showing that the particle deposition is non-uniform along the column. It is clearly shown that very fine particles as small as 100 nm can generate significant permeability reduction under specific conditions in high-permeability porous media representative of the Triassic reservoirs of the Paris Basin. These retention mechanisms are explained in the general framework of the DLVO theory.
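
Since the retention mechanisms are discussed in the framework of DLVO theory, the sketch below computes a textbook DLVO interaction energy between two identical spherical colloids (weak-overlap electrostatic double-layer term plus non-retarded van der Waals term), illustrating how raising the ionic strength compresses the double layer and lowers the repulsive barrier. The particle radius, surface potential and Hamaker constant are assumed illustrative values, not measurements from the study.

```python
import numpy as np

# Physical constants
kB = 1.381e-23          # J/K
e = 1.602e-19           # C
NA = 6.022e23           # 1/mol
eps0 = 8.854e-12        # F/m
eps_r = 78.5            # water at ~25 C (decreases at geothermal T)

def dlvo_sphere_sphere(h, a, psi0, ionic_strength, A_H, T=298.15):
    """Total DLVO interaction energy (J) between two identical spheres of
    radius a (m) separated by surface distance h (m), using the weak-overlap
    electrostatic term and the non-retarded van der Waals term (h << a).
    psi0: surface potential (V); ionic_strength: mol/L; A_H: Hamaker (J)."""
    eps = eps_r * eps0
    kappa = np.sqrt(2 * NA * e**2 * ionic_strength * 1e3 / (eps * kB * T))
    v_edl = 2 * np.pi * eps * a * psi0**2 * np.exp(-kappa * h)
    v_vdw = -A_H * a / (12 * h)
    return v_edl + v_vdw

# Illustrative parameters for ~100 nm illite colloids (assumed values)
h = np.logspace(-9.5, -7.5, 200)                     # ~0.3 - 30 nm gaps
for I in (0.001, 0.01, 0.1):                         # mol/L
    V = dlvo_sphere_sphere(h, a=50e-9, psi0=-0.030,
                           ionic_strength=I, A_H=1e-20)
    print(f"I={I} M: max barrier ~ {V.max() / (kB * 298.15):.1f} kT")
```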

Keywords: geothermal energy, reinjection, clays, colloids, retention, porosity, permeability decline, clogging, characterization, XRD, SEM-EDS, STEM, DLS, NMR, core flooding experiments

Procedia PDF Downloads 152
262 Spatio-Temporal Dynamic of Woody Vegetation Assessment Using Oblique Landscape Photographs

Authors: V. V. Fomin, A. P. Mikhailovich, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Ground-level landscape photos can be used as a source of objective data on woody vegetation and vegetation dynamics. We proposed a method for processing, analyzing, and presenting ground photographs, which comprises the following steps: 1) the researcher forms a holistic representation of the study area as a set of overlapping ground-level landscape photographs; 2) the characteristics of the landscape, objects, and phenomena present in the photographs are defined or obtained; 3) new textual descriptions and annotations for the ground-level landscape photographs are created, or existing ones are supplemented; 4) single or multiple ground-level landscape photographs are used to develop specialized geoinformation layers, schematic maps or thematic maps; 5) quantitative data describing both the images as a whole and the displayed objects and phenomena are determined using algorithms for automated image analysis. It is suggested to match each photo with a polygonal geoinformation layer, which is a sector consisting of areas corresponding to the parts of the landscape visible in the photo. The calculation of visibility areas is performed in a geoinformation system within a sector using a digital model of the study area relief and visibility analysis functions. Superposition of the visibility sectors corresponding to various camera viewpoints allows landscape photos to be matched with each other to create a complete representation of the space in question. It is suggested to mark user-defined objects or phenomena on the images, with subsequent superposition over the visibility sector in the form of map symbols. The spatial superposition of geoinformation layers over the visibility sector creates opportunities for image geotagging using quantitative data obtained from raster or vector layers within the sector, with the ability to generate annotations in natural language. The proposed method has proven itself well for relatively open and clearly visible areas with well-defined relief, for example, in mountainous areas in the treeline ecotone. When the polygonal layers of the visibility sectors for a large number of different photography points are topologically superimposed, a layer showing which sections of the entire study area are displayed in the photographs is formed. As a result of this overlapping of sectors, areas that do not appear in any photo are assessed as gaps. Based on the results of this procedure, it becomes possible to determine which photos display a specific area and from which photography points it is visible. This information may be obtained either as a query on the map or as a query on the attribute table of the layer. The method was tested using repeated photos taken from forty camera viewpoints located on the Ray-Iz mountain massif (Polar Urals, Russia) from 1960 until 2023. It has been successfully used in combination with other ground-based and remote sensing methods of studying the climate-driven dynamics of woody vegetation in the Polar Urals. Acknowledgment: This research was collaboratively funded by the Russian Ministry for Science and Education, project No. FEUG-2023-0002 (image representation), and the Russian Science Foundation, project No. 24-24-00235 (automated textual description).
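
A minimal sketch of the visibility-sector superposition described above, assuming the per-viewpoint viewsheds have already been rasterized to boolean grids (here filled with random placeholder values rather than real DEM-derived viewsheds): stacking them gives per-cell coverage counts, gap cells, and a simple "which photos show this cell" query.

```python
import numpy as np

# Hypothetical boolean visibility rasters, one per camera viewpoint
# (True where a cell of the study area is visible from that viewpoint);
# in practice these come from GIS viewshed analysis on a relief model.
rng = np.random.default_rng(42)
n_viewpoints, rows, cols = 40, 200, 300
visibility = rng.random((n_viewpoints, rows, cols)) > 0.7

coverage = visibility.sum(axis=0)          # how many photos show each cell
gaps = coverage == 0                       # cells not visible in any photo

def photos_showing(cell_row, cell_col):
    """Return the indices of viewpoints whose photos display a given cell
    (the map query described in the text)."""
    return np.flatnonzero(visibility[:, cell_row, cell_col])

print(f"gap fraction: {gaps.mean():.1%}")
print(f"cell (100, 150) appears in photos: {photos_showing(100, 150)}")
```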

Keywords: woody vegetation, repeated photographs

Procedia PDF Downloads 30
261 The Role of Emotional Intelligence in the Manager's Psychophysiological Activity during a Performance-Review Discussion

Authors: Mikko Salminen, Niklas Ravaja

Abstract:

Emotional intelligence (EI) consists of skills for monitoring one’s own emotions and the emotions of others, skills for discriminating between different emotions, and skills for using this information in thinking and actions. EI enhances, for example, work outcomes and organizational climate. We suggest that the role and manifestations of EI should also be studied in real leadership situations, especially during emotional, social interaction. Leadership is essentially a process of influencing others to reach a certain goal. This influencing happens through managerial processes and computer-mediated communication (e.g., e-mail), but also face-to-face, where facial expressions have a significant role in conveying emotional information. Persons with high EI are typically perceived more positively, and they have better social skills. We hypothesize that during social interaction, high EI enhances the ability to detect others’ emotional states and to control one’s own emotional expressions. We suggest that emotionally intelligent leaders experience less stress during social leadership situations, since they have better skills for dealing with the related emotional work. Thus, high-EI leaders would be more able to enjoy these situations, but also more efficient in choosing appropriate expressions for building constructive dialogue. We also suggest that emotionally intelligent leaders show more positive emotional expressions than low-EI leaders. To study these hypotheses, we observed the performance review discussions of 40 leaders (24 female) with 78 (45 female) of their followers. Each leader held a discussion with two followers. Psychophysiological methods were chosen because they provide objective and continuous data over the whole duration of the discussions. We recorded sweating of the hands (electrodermal activation) with electrodes placed on the fingers of the non-dominant hand to assess the stress-related physiological arousal of the leaders. In addition, facial electromyography was recorded from the cheek (zygomaticus major, activated during, e.g., smiling) and periocular (orbicularis oculi, activated during smiling) muscles using electrode pairs placed on the left side of the face. Leaders’ trait EI was measured with a 360-degree questionnaire, filled in by each leader’s followers, peers, and managers, and by the leaders themselves. High-EI leaders had less sweating of the hands (p = .007) than low-EI leaders. It is thus suggested that the high-EI leaders experienced less physiological stress during the discussions. Also, high scores on the factor “Using of emotions” were related to more facial muscle activation indicating positive emotional expressions (cheek muscle: p = .048; periocular muscle: p = .076, marginally significant). The results imply that emotionally intelligent managers are positively relaxed during social leadership situations such as a performance review discussion. The current study also highlights the importance of EI in face-to-face social interaction, given the central role facial expressions have in interaction situations. The study also offers new insight into the biological basis of trait EI. It is suggested that the identification, formation, and intelligent use of facial expressions are skills that could be trained during leadership development courses.

Keywords: emotional intelligence, leadership, performance review discussion, psychophysiology, social interaction

Procedia PDF Downloads 229
260 Partnering With Key Stakeholders for Successful Implementation of Inhaled Analgesia for Specific Emergency Department Presentations

Authors: Sarah Hazelwood, Janice Hay

Abstract:

Methoxyflurane is an inhaled analgesic administered via a disposable inhaler, which has been used in Australia for 40 years for the management of pain in children & adults. However, there is a lack of data for methoxyflurane as a frontline analgesic medication within the emergency department (ED). This study investigated the usefulness of methoxyflurane in a private inner-city ED. The study concluded that the inclusion of all key stakeholders in the prescribing, administering & use of this new process led to comprehensive uptake & vastly positive outcomes for consumers & health professionals. Method: A 12-week prospective pilot study was completed utilizing patients presenting to the ED in pain (numeric pain rating score > 4) who fit the requirements for methoxyflurane use (as outlined in the Australian Prescriber information package). Nurses completed a formatted spreadsheet for each interaction where methoxyflurane was used. Patient demographics, day, time, initial numeric pain score, analgesic response time, the reason for use, staff concerns (free text), patient feedback (free text), & discharge time were documented. When clinical concern was raised, the researcher retrieved & reviewed the patient notes. Results: 140 methoxyflurane inhalers were used. 60% of patients were 31 years of age & over (n=82), with 16% aged 70+. The gender split was 51% male, 49% female. Trauma-related pain (57%) accounted for the highest use, with the evening hours (1500-2259) seeing the greatest number of administrations (39%). Tuesday, Thursday & Sunday shared the highest daily use throughout the study. A minimum numeric pain score of 4/10 was recorded in 13 cases (9%), with scores of 5-7/10 (moderate pain) given by almost 50% of patients. In only 3 instances did pain scores increase after use of methoxyflurane (all other entries showed a pain score below the initial rating). Patients & staff noted an obvious analgesic response within 3 minutes of administration (n=96, 81%). Nurses documented a change in patient vital signs for 4 of the 15 patient-related concerns; the remaining concerns were due to “gagging” on the taste or “having a coughing episode”; one patient tried to leave the department before the procedure was attended (very euphoric state). Upon review of the staff concerns, no adverse events occurred & vital signs returned to therapeutic ranges within 10 minutes. Length of stay was compared with similar presentations (such as dislocated shoulder or ankle fracture) & showed an average 40-minute decrease in time to discharge. Methoxyflurane treatment was rated “positively” by > 80% of patients, with the remaining feedback relating to mild & transient concerns. Staff similarly noted a positive response to methoxyflurane as an analgesic & as an added tool for frontline analgesic purposes. Conclusion: Methoxyflurane should be used for suitable patient presentations requiring immediate, short-term pain relief. As a highly portable, non-narcotic avenue to treat pain, methoxyflurane showed obvious therapeutic benefit, positive feedback, & a shorter length of stay in the ED. By partnering with key stakeholders, this study determined that methoxyflurane use decreased workload, decreased wait time to analgesia, and increased patient satisfaction.

Keywords: analgesia, benefits, emergency, methoxyflurane

Procedia PDF Downloads 110
259 Shale Gas and Oil Resource Assessment in Middle and Lower Indus Basin of Pakistan

Authors: Amjad Ali Khan, Muhammad Ishaq Saqi, Kashif Ali

Abstract:

The focus of hydrocarbon exploration in Pakistan has primarily been on conventional hydrocarbon resources. The Directorate General Petroleum Concessions (DGPC) has taken the lead on the assessment of indigenous unconventional oil and gas resources, which has resulted in a ‘Shale Oil/Gas Resource Assessment Study’ conducted with the help of USAID. This was critically required in energy-starved Pakistan, where the gap between indigenous oil & gas production and demand has continued to widen for a long time. Exploration & exploitation of Pakistan's indigenous unconventional resources have become vital to meeting the country's energy demand and reducing its oil and gas import bill. This study has attempted to bridge a critical gap in geological information about the potential of shale gas & oil in Pakistan in four formations, i.e., Sembar, Lower Goru, Ranikot and Ghazij in the Middle and Lower Indus Basins, which were selected for shale gas & oil resource assessment. The primary objective of the study was to estimate and establish the shale oil/gas resource assessment of the study area by carrying out extensive geological analysis of exploration, appraisal and development wells drilled in the Middle and Lower Indus Basins, along with the identification of fairway(s) and sweet spots in the study area. The study covers the Lower and parts of the Middle Indus Basins located in Sindh, southern Punjab & the eastern parts of Baluchistan province, with a total sedimentary area of 271,795 km². Initially, 1611 wells were reviewed, including 1324 wells drilled through different shale formations. Based on the availability of the required technical data, a detailed petrophysical analysis of 124 wells (21 confidential & 103 in the public domain) was conducted for the shale gas/oil potential of the above-referred formations. The core & cuttings samples of 32 wells and 33 geochemical reports of prospective shale formations were available and were analyzed to calibrate the results of the petrophysical analysis with petrographic/laboratory analyses, increasing the credibility of the shale gas resource assessment. This study has identified the most prospective intervals, mainly in the Sembar and Lower Goru Formations, for shale gas/oil exploration in the Middle and Lower Indus Basins of Pakistan. The study recommends seven (07) sweet spots for undertaking pilot projects, which will enable evaluation of the actual production capability and sustainability of Pakistan's shale oil/gas reservoirs and the formulation of future strategies to explore and exploit these resources, including the fiscal incentives required for their development. Some E&P companies that have shown willingness to participate at appropriate times are being persuaded to form a consortium for undertaking the pilot projects. The location for undertaking the pilot project has been finalized as a result of a series of technical sessions by geoscientists of the potential consortium members after the review and evaluation of available studies.

Keywords: conventional resources, petrographic analysis, petrophysical analysis, unconventional resources, shale gas & oil, sweet spots

Procedia PDF Downloads 21
258 Brand Building in Higher Education: A Grounded Theory Investigation of the Impact of the ‘Positive-Visualization-Course in Brand Identity’ upon Freshmen Student's Perception

Authors: Maria Kountouridou, Dino Domic

Abstract:

Within an increasingly competitive and dynamic environment, the higher education sector is becoming more commodified, with the concept of branding becoming exceedingly imperative and an inextricable ingredient of a university’s success. Branding in higher education has proven to be an effective strategy that has received considerable attention in recent years, and a growing number of articles have begun to appear in the literature. However, a clear void in the literature confirms that the concept of students’ perceptions of a university’s brand image has not been researched extensively. An investigation of this central concept is of paramount importance since it will facilitate the development of an inductively generated theoretical model concerning branding in higher education. This research focuses on examining the impact of the ‘positive-visualization-course in brand identity’ on the perception of freshmen students towards a university’s brand image. A grounded theory methodology has been selected, consisting of semi-structured interviews. Forty-two students participated in the research, twenty-five women and seventeen men. The sample was identified through the use of the snowball sampling technique. The participants were divided into two groups (experimental and control) after the researcher had taken into consideration the factor ‘program of study’, to eliminate any possible interaction between the participants of each group. An experiment was carried out in which a ‘positive-visualization-course in brand identity’ was conducted with the participants of the experimental group, while the participants of the control group were not exposed to the course. For the purpose of this research, the term ‘positive-visualization-course in brand identity’ refers to a course in which the brand's history, past achievements/recognitions/awards, values, and mission are presented. Prior to the course implementation, face-to-face semi-structured interviews were carried out with the participants of both groups, with the aim of examining the freshmen students’ perceptions of the university’s brand image. One week after the course implementation, the researcher carried out semi-structured interviews with the participants of the experimental group only, in order to identify whether students’ perceptions had been affected by the course. Four months after the course completion, semi-structured interviews were carried out with the participants of both groups. Eight months after the course completion, semi-structured interviews were conducted with the aim of identifying the freshmen students’ updated perceptions. Data were analyzed using substantive coding (open and selective coding), theoretical coding, field memos, and constant comparative analysis. The findings strongly suggest that the ‘positive-visualization-course in brand identity’ can positively affect freshmen students’ perceptions of a university’s brand image. Additionally, other factors contribute to the formation of perceptions throughout the months. This study contributes to and expands upon the existing literature by presenting an inductively generated theoretical model to guide future research on the links between the ‘positive-visualization-course in brand identity’ and the perception of freshmen students towards a university’s brand image.

Keywords: brand image, brand name, branding, higher education marketing, perception

Procedia PDF Downloads 158
257 Leveraging Multimodal Neuroimaging Techniques to in vivo Address Compensatory and Disintegration Patterns in Neurodegenerative Disorders: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis

Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos

Abstract:

Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and its role in the pathophysiology and symptom heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate the structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC) who underwent MRI in a 3.0T scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted imaging and echo-planar imaging sequences for the analysis of volumetric, tractography and functional resting-state data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric (cerebellar gray matter density) and functional (cortico-cerebellar resting-state functional connectivity) data, respectively. The Brainance suite was used for the analysis of tractography data (cortico-cerebellar white matter integrity; fractional anisotropy [FA]; axial and radial diffusivity [AD; RD]) to reconstruct the cerebellar tracts. Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC. However, they showed decreased FA and increased diffusivity measures in cortico-cerebellar tracts, and increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, as well as a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared to HC. The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA and increased diffusivity measures in WM cortico-cerebellar tracts, and increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, and the pattern of decreased GM density, decreased WM integrity and mostly decreased functional connectivity in RRMS patients, emphasizes the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: In conclusion, our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. An extension and future opportunity for leveraging multimodal neuroimaging data remains the integration of such data into recently applied machine learning approaches to more accurately classify patients and predict their disease course.
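
For readers unfamiliar with the diffusion metrics reported above, the sketch below computes FA, AD, RD (and mean diffusivity) from the eigenvalues of a diffusion tensor using the standard definitions; the example eigenvalues are illustrative, not values from the study.

```python
import numpy as np

def dti_scalars(evals):
    """Compute the diffusion-tensor scalars reported in the study from the
    tensor eigenvalues (lambda1 >= lambda2 >= lambda3, in mm^2/s)."""
    l1, l2, l3 = np.sort(evals)[::-1]
    md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
    fa = np.sqrt(0.5 * ((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l1 - l3) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))  # fractional anisotropy
    ad = l1                                        # axial diffusivity
    rd = (l2 + l3) / 2.0                           # radial diffusivity
    return fa, ad, rd, md

# Illustrative eigenvalues for a healthy-looking white-matter voxel
print(dti_scalars([1.7e-3, 0.4e-3, 0.3e-3]))
```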

Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis

Procedia PDF Downloads 119
256 Development of Infertility Prevention Psycho-Education Program for University Students and Evaluation of Its Effectiveness

Authors: Digdem M. Siyez, Bariscan Ozturk, Erol Esen, Ender Siyez, Yelda Kagnici, Bahar Baran

Abstract:

Infertility is a reproductive disease identified by the absence of pregnancy after 12 months or more of regular unprotected sexual intercourse. Some of the factors that cause infertility, which has been considered a social and societal issue since the first days of humankind, are preventable. These are venereal diseases, age, the frequency and timing of intercourse, drug use, body weight, and environmental and occupational conditions. Having accurate information about reproductive health is essential for taking protective and preventive measures, and it is accepted as the most effective way to reduce the rate of infertility. However, the literature review showed that there are very few studies that focus on the prevention of infertility. The aim of this study is to develop a psycho-education program to reduce infertility among university students and to evaluate the program’s effectiveness. It is believed that this program will increase the level of information about infertility among university students, help them adopt healthy attitudes, develop life skills, create awareness of the risk factors, and also contribute to the literature. Throughout the study, first, the contents of sexual/reproductive health programs developed for university students were examined by the researchers. In addition, the “Views about Reproductive Health Psycho-education Program Survey” was developed and applied to 10221 university students from 21 universities. In accordance with the literature and the university students’ views, a reproductive health psycho-education program consisting of 9 sessions, each lasting 90 minutes, was developed. The pilot program was carried out with 16 volunteer undergraduate students attending a state university. In the evaluation of the pilot study, the “Session Evaluation Form” was administered to the participants at the end of each session, and the “Program Evaluation Form” at the end of the entire program. In addition, one week after the end of the program, a focus group was conducted with half of the group and individual interviews with the rest. Based on the evaluations, it was determined that the session duration is sufficient, the teaching methods meet expectations, the techniques applied are appropriate and clear, and the materials are adequate. Also, an extra session was added to the psycho-education program based on the feedback of the participants. In order to evaluate the program’s effectiveness, a Solomon control group design will be used. According to this design, the research has 2 experimental groups and 2 control groups. The participants who voluntarily joined the research after the announcement of the psycho-education program were divided into experimental and control groups. In the experimental 1 and control 1 groups, the “Personal Information Test”, “Infertility Information Test”, “Infertility Attitude Scale”, “Self Identification Inventory” and “Melbourne Decision Scale” were administered as preliminary tests. At the present stage, the psycho-education program is still ongoing. After this 10-week program, the same tests will be administered again as post-tests. The decision on which statistical method will be applied in the analysis will be made afterwards, according to whether the data meet the assumptions.

Keywords: infertility, prevention, psycho-education, reproductive health

Procedia PDF Downloads 203
255 Health Reforms in Central and Eastern European Countries: Results, Dynamics, and Outcomes Measure

Authors: Piotr Romaniuk, Krzysztof Kaczmarek, Adam Szromek

Abstract:

Background: A number of approaches to assessing the performance of health systems have been proposed so far. Nonetheless, they lack a consensus regarding the key components of the assessment procedure and the criteria of evaluation. The WHO and OECD have developed methods of assessing health systems to counteract the underlying issues, but these are not free of controversies and have not produced a commonly accepted consensus. Aim of the study: On the basis of the WHO and OECD approaches, we developed our own methodology to assess the performance of health systems in Central and Eastern European countries. We applied the method to compare the effects of health system reforms in 20 countries of the region, in order to evaluate the dynamics of change in health system outcomes. Methods: Data were collected for the 25-year period after the fall of communism and divided into different post-reform stages. Datasets collected from individual countries underwent one-, two- or multi-dimensional statistical analyses, and the Synthetic Measure of health system Outcomes (SMO) was calculated on the basis of the method of zeroed unitarization. A map of the dynamics of change over time across the region was constructed. Results: When comparing the tested group in terms of the average SMO value throughout the analyzed period, we noticed some differences, although the gaps between individual countries were small. The countries with the highest SMO were the Czech Republic, Estonia, Poland, Hungary and Slovenia, while the lowest values were observed in Ukraine, Russia, Moldova, Georgia, Albania, and Armenia. Countries differ in the range of SMO value changes throughout the analyzed period. The dynamics of change is high in the case of Estonia and Latvia, moderate in the case of Poland, Hungary, the Czech Republic, Croatia, Russia and Moldova, and small for Belarus, Ukraine, Macedonia, Lithuania, and Georgia. This information reveals the fluctuation dynamics of the measured value in time, yet it does not necessarily mean that such a dynamic range reflects an improvement in a given country. In reality, individual countries moved along the scale with different effects. Albania’s health system outcomes declined, while Armenia and Georgia made progress but lost ground to the regional leaders. On the other hand, Latvia and Estonia showed the most dynamic progress in improving outcomes. Conclusions: Countries that decided to implement comprehensive health reform achieved a positive result in terms of further improvements in health system efficiency. In addition, a higher level of efficiency during the initial transition period generally determined a higher subsequent value of the efficiency index, but not the dynamics of change. The paths of health system outcome improvement are highly diverse between countries. The instrument we propose constitutes a useful tool to evaluate the effectiveness of reform processes in post-communist countries, but more studies are needed to identify factors that may determine the results obtained by individual countries, as well as to eliminate the limitations of the methodology we applied.
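The zeroed unitarization step mentioned above can be illustrated with a short sketch: each indicator is rescaled to [0, 1] (destimulants are inverted so that higher always means better), and the SMO is taken as the mean of the normalized indicators. This is a minimal sketch assuming two illustrative indicators and made-up country values, not the study's actual dataset or variable set.

```python
import numpy as np

def zeroed_unitarization(x, destimulant=False):
    """Rescale an indicator to [0, 1]; destimulants are inverted so that
    higher values always mean better health system outcomes."""
    rng = x.max() - x.min()
    if rng == 0:
        return np.zeros_like(x, dtype=float)
    z = (x - x.min()) / rng
    return 1.0 - z if destimulant else z

# Illustrative indicators for a handful of countries (rows):
# life expectancy is a stimulant, infant mortality a destimulant.
life_expectancy = np.array([74.1, 76.3, 71.0, 77.8, 72.5])
infant_mortality = np.array([5.9, 3.8, 8.7, 3.1, 7.4])

z = np.column_stack([
    zeroed_unitarization(life_expectancy),
    zeroed_unitarization(infant_mortality, destimulant=True),
])

# Synthetic measure of outcomes: the average of the normalized indicators.
smo = z.mean(axis=1)
print(np.round(smo, 3))
```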

Keywords: health system outcomes, health reforms, health system assessment, health system evaluation

Procedia PDF Downloads 267
254 Identification of Electric Energy Storage Acceptance Types: Empirical Findings from the German Manufacturing Industry

Authors: Dominik Halstrup, Marlene Schriever

Abstract:

Industry, as one of the main energy consumers, is of critical importance in the transformation of the energy system towards renewable energies. The distributed character of the energy transition demands that further flexibility be introduced to the grid. In order to shed further light on the acceptance of Electric Energy Storage (ESS) from an industrial point of view, this study examines the German manufacturing industry. The analysis in this paper uses data from a survey of 101 manufacturing companies in Germany. As part of a two-stage research design, both qualitative and quantitative data were collected. Based on a literature review, an acceptance concept was developed and four user types were identified and incorporated in the questionnaire: (Dedicated) User, Impeded User, Forced User and (Dedicated) Non-User. Both descriptive and bivariate analyses are deployed to identify the level of acceptance in the different organizations. After a factor analysis was conducted, variables were grouped to form independent acceptance factors. Of the 22 organizations that show a positive attitude towards ESS, 5 have already implemented ESS and can therefore be considered ‘Dedicated Users’. The remaining 17 organizations have a positive attitude but have not implemented ESS yet. The results suggest that profitability plays an important role, as do load-management systems that are already in place. Surprisingly, 2 organizations have implemented ESS even though they have a negative attitude towards it. This is an example of a ‘Forced User’, where reasons of overriding importance or supporters with overriding authority might have forced the company to implement ESS. By far the biggest subset of the sample shows (critical) distance and can therefore be considered ‘(Dedicated) Non-Users’. The results indicate that the majority of the respondents have not yet thought ESS through for their own organization. For the majority of the sample, one can therefore not speak of critical distance but rather of a distance due to insufficient information and perceived unprofitability. This paper identifies the relative state of acceptance of ESS in the manufacturing industry as well as current reasons for hindrance and perspectives for future growth of ESS in an industrial setting from a policy level. The interest currently generated by the media could be channeled into a more substantial and individual discussion about ESS in an industrial setting. If the current perception of profitability could be addressed and communicated accordingly, ESS and their use in, for instance, cooperative business models could become a topic for more organizations in Germany and other parts of the world. As price mechanisms tend to favor existing technologies, policy makers need to further assess the use of ESS and acknowledge its positive effects when integrated in an energy system. The subfields of generation, transmission and distribution are becoming increasingly intertwined. New technologies and business models entering the market, such as ESS or cooperative arrangements, increase the number of stakeholders. Organizations need to find their place within this array of stakeholders.
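The four acceptance types can be thought of as a simple cross-classification of attitude and implementation status. The sketch below shows one plausible operationalization; the mapping of 'Impeded User' to a positive attitude without implementation is an assumption inferred from the typology, and the sample records are invented.

```python
def acceptance_type(positive_attitude: bool, ess_implemented: bool) -> str:
    """One plausible mapping of the four acceptance types described in the
    paper; the exact operationalization used by the authors may differ."""
    if positive_attitude and ess_implemented:
        return "(Dedicated) User"
    if positive_attitude and not ess_implemented:
        return "Impeded User"
    if not positive_attitude and ess_implemented:
        return "Forced User"
    return "(Dedicated) Non-User"

# Illustrative survey records: (attitude_positive, implemented)
sample = [(True, True), (True, False), (False, True), (False, False)]
for attitude, implemented in sample:
    print(attitude, implemented, "->", acceptance_type(attitude, implemented))
```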

Keywords: acceptance, energy storage solutions, German energy transition, manufacturing industry

Procedia PDF Downloads 201
253 Academic Staff Development: A Lever to Address the Challenges of the 21st Century University Classroom

Authors: Severino Machingambi

Abstract:

Most academics entering higher education as lecturers in South Africa do not have qualifications in education or teaching. This creates serious problems, since they are not sufficiently equipped with the pedagogical approaches and theories that inform their learning facilitation strategies. This, arguably, is one of the reasons why higher education institutions are experiencing high student failure rates. In order to mitigate this problem, it is critical that higher education institutions devise internal academic staff development programmes to equip academics with pedagogical skills and competencies and so enhance the quality of student learning. This paper reports on how the Teaching and Learning Development Centre of a university used design-based research methodology to conceptualise and implement an academic staff development programme for new academics at a university of technology. This approach revolves around the designing, testing and refining of an educational intervention. Design-based research is an important methodology for understanding how, when, and why educational innovations work in practice. The need for a professional development course for academics arose because most academics at the university did not have teaching qualifications and many of them were employed straight from industry with little understanding of pedagogical approaches. This paper examines three key aspects of the programme, namely the preliminary phase, the teaching experiment and the retrospective analysis. The preliminary phase is the stage in which problem identification takes place. The problem that this research sought to address relates to the unsatisfactory academic performance of the majority of the students in the institution. It was therefore hypothesized that the problem could be dealt with by professionalising new academics through engagement in an academic staff development programme. The teaching experiment phase afforded researchers and participants in the programme the opportunity to test and refine the proposed intervention and the design principles upon which it was based. This phase revolved around the testing of the new academics' professional development programme and created a platform for researchers and academics in the programme to experiment with various activities and instructional strategies such as case studies, observations, discussions and portfolio building. The teaching experiment phase was followed by the retrospective analysis stage, in which the research team looked back and tried to give a trustworthy account of the teaching/learning process that had taken place. A questionnaire and focus group discussions were used to collect data from participants, which helped to evaluate the programme and its implementation. One of the findings of this study was that academics joining a university really need an academic induction programme that inducts them into the discourse of teaching and learning. The study also revealed that existing academics can be placed on formal study programmes in which they acquire educational qualifications, with a view to equipping them with useful classroom discourses. The study therefore concludes that new and existing academics in universities should be supported through induction programmes and placement on formal studies in teaching and learning so that they are capacitated as facilitators of learning.

Keywords: academic staff, pedagogy, programme, staff development

Procedia PDF Downloads 109
252 Telomerase, a Biomarker in Oral Cancer Cell Proliferation and Tool for Its Prevention at Initial Stage

Authors: Shaista Suhail

Abstract:

As the cancer population increases sharply, the incidence of oral squamous cell carcinoma (OSCC) is also expected to increase. Oral carcinogenesis is a highly complex, multistep process which involves the accumulation of genetic alterations that lead to the induction of proteins promoting cell growth (encoded by oncogenes) and increased enzymatic (telomerase) activity promoting cancer cell proliferation. The global increase in frequency and mortality, as well as the poor prognosis of oral squamous cell carcinoma, has intensified current research efforts in the field of prevention and early detection of this disease. Advances in the understanding of the molecular basis of oral cancer should help in the identification of new markers. The study of the carcinogenic process of oral cancer, including continued analysis of new genetic alterations along with their temporal sequencing during initiation, promotion and progression, will allow us to identify new diagnostic and prognostic factors, which will provide a promising basis for the application of more rational and efficient treatments. Telomerase activity has readily been found in most cancer biopsies, premalignant lesions and germ cells. Telomerase activity is generally absent in normal tissues. It is known to be induced upon immortalization or malignant transformation of human cells, such as oral cancer cells. Maintenance of telomeres plays an essential role during the transformation from the precancerous to the malignant stage. Mammalian telomeres, specialized nucleoprotein structures, are composed of large concatemers of the guanine-rich sequence 5′-TTAGGG-3′. The roles of telomeres in regulating both genome stability and replicative immortality appear to contribute in essential ways to cancer initiation and progression. It is concluded that telomerase activity can be used as a biomarker for the diagnosis of malignant oral cancer and as a target for inactivation in chemotherapy or gene therapy. Its expression will also prove to be an important diagnostic tool as well as a novel target for cancer therapy. The activation of telomerase may be an important step in tumorigenesis, which can be controlled by inactivating its activity during chemotherapy. The expression and activity of telomerase are indispensable for cancer development. There are currently no drugs that are highly effective in treating oral cancers. There is a general call for new, emerging drugs or methods that are highly effective towards cancer treatment, possess low toxicity, and have a minor environmental impact. Some novel natural products also offer opportunities for innovation in drug discovery. Natural compounds isolated from medicinal plants, as rich sources of novel anticancer drugs, have attracted increasing interest, some showing enzyme (telomerase) blocking properties. Alarming reports of cancer cases are increasing awareness among clinicians and researchers of the need to investigate newer drugs with low toxicity.

Keywords: oral carcinoma, telomere, telomerase, blockage

Procedia PDF Downloads 149
251 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as necessary given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. The DNN was initialized with one hidden layer, and the number of hidden layers was increased during training to five. A refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed; the objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores based on the lattice concept (the graph cost, the acoustic cost and a combination of both) were used as the selection technique. The performance of the ASR system is measured by means of the Word Error Rate (WER). The test dataset was renewed in order to remove the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result was a relative WER improvement of 6%. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
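The three-step procedure (seed training, decoding of unlabeled audio, confidence-based selection and retraining) can be sketched with a toy self-training loop. The sketch below uses a small scikit-learn feed-forward network on synthetic data as a stand-in for the DNN acoustic model, and a simple posterior-probability threshold in place of the lattice-based graph/acoustic cost scores; all names, sizes and thresholds are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

# Toy stand-in for the acoustic model: a small feed-forward network on
# synthetic "frames"; the real system would use lattice-based confidence
# scores (graph cost, acoustic cost) instead of class probabilities.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_lab, y_lab = X[:200], y[:200]           # small transcribed (labeled) set
X_unlab = X[200:1500]                     # untranscribed audio
X_test, y_test = X[1500:], y[1500:]

# (a) Train a seed model on the labeled data.
seed = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
seed.fit(X_lab, y_lab)

# (b) Decode the unlabeled data and keep only confident hypotheses.
proba = seed.predict_proba(X_unlab)
conf = proba.max(axis=1)
keep = conf > 0.9                         # confidence threshold (illustrative)
X_self, y_self = X_unlab[keep], proba[keep].argmax(axis=1)

# (c) Retrain on the union of labeled and self-labeled data.
retrained = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
retrained.fit(np.vstack([X_lab, X_self]), np.concatenate([y_lab, y_self]))

print("seed accuracy:     ", seed.score(X_test, y_test))
print("retrained accuracy:", retrained.score(X_test, y_test))
```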

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 322
250 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms still remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined gene expression in media and neo-intima from plaques, as well as distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green and sky blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphology), a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLIN2, and ACADL) were then identified by intersecting the 2509 key genes and the 102 DEGs with lipid-related genes from the GeneCards database. The discriminative value of the six hub genes was estimated by a robust classifier with an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating the disease group from the control group. Moreover, PCA visualization demonstrated a clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
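The hub-gene selection step (intersecting DEGs, WGCNA key-module genes and lipid-related genes) and the subsequent AUC estimate can be illustrated with a minimal sketch. The gene sets, expression matrix and labels below are invented for illustration; only the six hub gene names come from the abstract, and the classifier is a generic logistic regression rather than the authors' actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Illustrative gene sets: the real study intersected the DEGs, the WGCNA
# key-module genes and lipid-related genes from GeneCards.
degs      = {"CD36", "DPP4", "HMOX1", "PLA2G7", "PLIN2", "ACADL", "IL6", "CCL2"}
wgcna_key = {"CD36", "DPP4", "HMOX1", "PLA2G7", "PLIN2", "ACADL", "TLR4"}
lipid     = {"CD36", "DPP4", "HMOX1", "PLA2G7", "PLIN2", "ACADL", "APOE"}

hub_genes = sorted(degs & wgcna_key & lipid)
print("hub genes:", hub_genes)

# Toy expression matrix (samples x hub genes) and disease labels, used only
# to illustrate how the AUC of a hub-gene classifier could be estimated.
rng = np.random.default_rng(0)
labels = np.array([0] * 16 + [1] * 16)
expr = rng.normal(size=(32, len(hub_genes))) + labels[:, None] * 0.8

clf = LogisticRegression(max_iter=1000).fit(expr, labels)
auc = roc_auc_score(labels, clf.predict_proba(expr)[:, 1])
print("AUC:", round(auc, 3))
```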

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 37
249 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enables end users to use cloud services on a ‘pay per usage’ basis. This technology is growing at a fast pace, and so are its security threats. Storage is one of the various services provided by the cloud. In this service, security is a vital factor both for authenticating legitimate users and for protecting information. This paper brings in efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. Unique identification and low intrusiveness give user-behaviour-based biometrics greater reliability than conventional password authentication. With biometric systems, accounts are accessed only by the legitimate user and not by an impostor. The biometric templates employed here do not include a single trait but multiple traits, viz. iris and fingerprints. The coordinating stage of the authentication system is based on an ensemble Support Vector Machine (SVM), with optimization by assembling the weights of the base SVMs for the SVM ensemble after each individual SVM of the ensemble is trained by the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template through a fusion process. The data security problem is addressed and an enhanced security architecture is proposed, using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing the ciphertext from being broken back into the original text. The proposed double cryptographic key scheme thereby provides better user authentication and better security, distinguishing between genuine and fake users. Thus, there are three important modules in the proposed work: 1) feature extraction, 2) multimodal biometric template generation and 3) cryptographic key generation. The feature and texture properties are first extracted from the respective fingerprint and iris images. Finally, with the help of a fuzzy neural network and a symmetric cryptography algorithm, a double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if they have already been stolen. The results show that the authentication process is effective and the stored information is secure.
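As a rough illustration of the key-generation idea (deriving a symmetric key from a fused multimodal template and using it to protect stored records), the sketch below concatenates two made-up feature vectors, quantizes them, and derives a Fernet key via SHA-256. The feature vectors, the quantization step and the use of hashlib/cryptography are illustrative assumptions; the paper's actual pipeline relies on an AFSA-trained SVM ensemble and a fuzzy neural network, which are not reproduced here.

```python
import base64
import hashlib
import numpy as np
from cryptography.fernet import Fernet

# Illustrative fixed-length feature vectors standing in for the fingerprint
# and iris templates; the paper's matching stages are not reproduced here.
fingerprint_features = np.array([0.12, 0.87, 0.45, 0.33], dtype=np.float32)
iris_features        = np.array([0.91, 0.05, 0.64, 0.28], dtype=np.float32)

# Feature-level fusion: concatenate the two templates, then quantize so that
# small acquisition noise does not change the derived key.
fused = np.concatenate([fingerprint_features, iris_features])
quantized = np.round(fused, 1).tobytes()

# Derive a symmetric key from the fused template and use it for encryption.
key = base64.urlsafe_b64encode(hashlib.sha256(quantized).digest())
cipher = Fernet(key)

token = cipher.encrypt(b"record stored in the cloud")
print(cipher.decrypt(token))
```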

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 236
248 The Impact of Shifting Trading Pattern from Long-Haul to Short-Sea to the Car Carriers’ Freight Revenues

Authors: Tianyu Wang, Nikita Karandikar

Abstract:

The uncertainty around the cost, safety, and feasibility of decarbonized shipping fuels has made it increasingly complex for shipping companies to set pricing strategies and forecast their freight revenues going forward. The increase in green fuel surcharges will ultimately influence consumer prices for automobiles. Auto shipping demand (ton-miles) has been gradually shifting from long-haul to short-sea trade over the past years, following the relocation of original equipment manufacturer (OEM) production to regions such as South America and Southeast Asia. The objective of this paper is twofold: 1) to investigate the development of car carriers’ freight revenues over the years as the trade pattern gradually shifts towards short-sea exports, and 2) to empirically identify the quantitative impact of this shift, mainly on freight rates but also on vessel size, fleet size and greenhouse gas (GHG) emissions in Roll-on/Roll-off (Ro-Ro) shipping. In this paper, a model for analyzing and forecasting ton-miles and freight revenues for the trade routes AS-NA (Asia to North America), EU-NA (Europe to North America), and SA-NA (South America to North America) is established by deploying Automatic Identification System (AIS) data and the financial results of a selected car carrier company. More specifically, Wallenius Wilhelmsen Logistics (WALWIL), the Norwegian Ro-Ro carrier listed on the Oslo Stock Exchange, is selected as the case study company. AIS-based ton-mile datasets of WALWIL vessels sailing into the North America region from three different origins (Asia, Europe, and South America), together with WALWIL’s quarterly freight revenues as reported in trade segments, will be investigated and compared for the past five years (2018-2022). Furthermore, ordinary least squares (OLS) regression is utilized to construct the ton-mile demand and freight revenue forecasts. The determinants of the trade pattern shift, such as import tariffs following the China-US trade war and fuel prices following the 0.1% Emission Control Area (ECA) zone requirement after IMO 2020, will be set as key variable inputs to the model. The model will be tested on another newly listed Norwegian car carrier, Hoegh Autoliner, to forecast its 2022 financial results and to validate the accuracy against its actual results. GHG emissions on the three routes will be compared and discussed based on a constant emission-per-mile assumption and voyage distances. Our findings will provide important insights into 1) the trade-off between revenue reduction and energy saving under the new ton-mile pattern and 2) how the shifting trade flows would influence future vessel and fleet size requirements.
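The OLS-based forecasting step can be sketched as follows: quarterly freight revenue is regressed on AIS-derived ton-miles plus shift determinants such as a tariff dummy and fuel price, and the fitted model is then used to predict a future quarter. All numbers, coefficients and variable choices below are synthetic illustrations, not WALWIL's or Hoegh Autoliner's actual data.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic quarterly observations standing in for the AIS-derived ton-mile
# series and the carrier's reported freight revenue.
rng = np.random.default_rng(1)
n = 20                                          # five years of quarters
ton_miles  = rng.uniform(8, 14, n)              # billion ton-miles per quarter
fuel_price = rng.uniform(350, 700, n)           # USD per tonne of compliant fuel
tariff     = (np.arange(n) >= 6).astype(float)  # post trade-war tariff dummy
revenue = 40 + 9.5 * ton_miles - 0.02 * fuel_price - 15 * tariff \
          + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([ton_miles, fuel_price, tariff]))
ols = sm.OLS(revenue, X).fit()
print(ols.params)                               # fitted coefficients

# Forecast revenue for a hypothetical next quarter.
next_quarter = sm.add_constant(np.array([[11.0, 600.0, 1.0]]), has_constant="add")
print(ols.predict(next_quarter))
```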

Keywords: AIS, automobile exports, maritime big data, trade flows

Procedia PDF Downloads 95
247 Agroecology Approaches Towards Sustainable Agriculture and Food System: Reviewing and Exploring Selected Policies and Strategic Documents through an Agroecological Lens

Authors: Dereje Regasa

Abstract:

The global food system is at a crossroads, which requires prompt action to minimize the effects of the crises. To support efforts to mitigate the crises, the Food and Agriculture Organization (FAO) established alternative approaches for sustainable agri-food systems. Agroecological elements and principles were developed to guide and support the measures that countries need in order to achieve the Sustainable Development Goals (SDGs). The SDGs require the systemic integration of practices for a smart intensification or adaptation of traditional or industrial agriculture. As Ethiopia is one of the countries working towards the SDGs, its agricultural practices need to be guided by these agroecological elements and principles. Aiming at the identification of challenging aspects of a sustainable agri-food system and the characterization of an enabling environment for agroecology, as well as exploring to what extent existing policies and strategies support the agroecological transition process, four policy and strategy documents were reviewed. These documents are the Rural Development Policy and Strategy, the Environment Policy, the Biodiversity Policy, and the Soil Strategy of the Ministry of Agriculture (MoA). Using the Agroecology Criteria Tool (ACT), the contents were reviewed, focusing on agroecological requirements and the inclusion of sustainable practices. ACT is designed to support a self-assessment of elements supporting agroecology. For each element, binary values were assigned based on the inclusion of the minimum requirements index and then validated through discussion with the document owners. The results showed that the documents were well below the requirements for an agroecological transition of the agri-food system. The Rural Development Policy and Strategy reaches only 83% for Human and Social Value and does not support the transition with respect to the other elements. The Biodiversity Policy and the Soil Strategy fully cover Co-creation and Sharing of Knowledge (100%), while the remaining elements are not considered sufficiently. In contrast, the Environment Policy supports the transition with three elements accounting for 100%: Resilience, Recycling, and Human and Social Care. However, when the four documents were combined, elements such as Synergies, Diversity, Efficiency, Human and Social Value, Responsible Governance, and Co-creation and Sharing of Knowledge were identified as fully supported (100%). This shows that the policies and strategies complement one another to a certain extent. However, the evaluation results call for improvements concerning elements like Culture and Food Traditions, Circular and Solidarity Economy, Resilience, Recycling, and Regulation and Balance, since the majority of the elements were not sufficiently observed. Consequently, guidance for the smart intensification of local practices is needed, as well as traditional knowledge enriched with advanced technologies. Ethiopian agricultural and environmental policies and strategies should provide sufficient support and guidance for the intensification of sustainable practices and a framework for an agroecological transition towards a sustainable agri-food system.
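The way the binary ACT scores combine across documents (an element counting as covered once at least one of the reviewed documents meets its minimum requirements) can be shown with a small sketch. The element names come from the abstract, but the per-document scores below are invented placeholders, not the study's actual scoring sheet.

```python
# Illustrative binary scores (1 = minimum requirements included) for a few
# agroecology elements across the four reviewed documents; the values are
# made up to show the combination logic only.
elements = ["Synergies", "Diversity", "Recycling", "Resilience",
            "Culture and Food Traditions"]
scores = {
    "Rural Development Policy and Strategy": [0, 0, 0, 0, 0],
    "Environment Policy":                    [0, 0, 1, 1, 0],
    "Biodiversity Policy":                   [1, 1, 0, 0, 0],
    "Soil Strategy":                         [1, 1, 0, 0, 0],
}

# An element is treated as supported overall if at least one document covers it.
for i, element in enumerate(elements):
    covered = any(doc_scores[i] for doc_scores in scores.values())
    print(f"{element:30s} {'supported' if covered else 'not supported'}")
```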

Keywords: agroecology, diversity, recycling, sustainable food system, transition

Procedia PDF Downloads 63
246 Molecular Characterization, Host Plant Resistance and Epidemiology of Bean Common Mosaic Virus Infecting Cowpea (Vigna unguiculata L. Walp)

Authors: N. Manjunatha, K. T. Rangswamy, N. Nagaraju, H. A. Prameela, P. Rudraswamy, M. Krishnareddy

Abstract:

The identification of viruses infecting cowpea, especially potyviruses, is challenging. Even though there are several studies on viruses causing diseases in cowpea, they are difficult to distinguish on the basis of symptoms and serological detection. Considering the differentiation of potyviruses as a constraint, the present study was initiated for the molecular characterization, host plant resistance and epidemiology of BCMV infecting cowpea. The etiological agent causing cowpea mosaic was identified as Bean Common Mosaic Virus (BCMV) on the basis of RT-PCR and electron microscopy. An approximately 750 bp PCR product corresponding to the coat protein (CP) region of the virus was obtained, and long flexuous filamentous particles measuring about 952 nm, typical of the genus Potyvirus, were observed under the electron microscope. The genome of the characterized virus isolate had 10054 nucleotides, excluding the 3’ terminal poly(A) tail. Comparison of the polyprotein of the virus with other potyviruses showed a similar genome organization, with 9 cleavage sites resulting in 10 functional proteins. In pairwise sequence comparisons of individual genes, P1 was the most divergent, whereas the CP gene was less divergent at the nucleotide and amino acid level. A phylogenetic tree constructed from multiple sequence alignments of the polyprotein nucleotide and amino acid sequences of cowpea BCMV and other potyviruses showed that the virus is closely related to BCMV-HB, whereas the soybean variant from China (KJ807806) and the NL1 isolate (AY112735) showed 93.8% (5’UTR) and 94.9% (3’UTR) homology, respectively, with other BCMV isolates. The virus was transmitted to different leguminous plant species and produced systemic symptoms under greenhouse conditions. Out of 100 cowpea genotypes screened, three genotypes, viz. IC 8966, V 5 and IC 202806, showed an immune reaction under both field and greenhouse conditions. Single marker analysis (SMA) revealed that, out of 4 SSR markers linked to BCMV resistance, the M135 marker explains 28.2% of the phenotypic variation (R2); the polymorphic information content (PIC) of these markers ranged from 0.23 to 0.37. Correlation and regression analysis showed that rainfall and minimum temperature had a significant negative impact on, and a strong relationship with, the aphid population, whereas only a weak correlation was observed with disease incidence. Path coefficient analysis revealed that most of the weather parameters, except minimum temperature, contributed indirectly to the aphid population and disease incidence. This study helps to identify specific gaps in knowledge for researchers who may wish to further analyse the science behind the complex interactions between vector, virus and host in relation to the environment. The resistant genotypes identified could be effectively used in resistance breeding programmes.
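The pairwise comparisons of individual genes described above boil down to percent-identity calculations between aligned sequences. The sketch below shows a minimal identity calculation for two pre-aligned fragments; the sequences are short made-up stand-ins, not the study's CP-gene data, and alignment and gap handling are omitted.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two equal-length, pre-aligned sequences;
    gap handling and the alignment itself are omitted for brevity."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Short made-up fragments standing in for aligned coat-protein sequences of
# the cowpea isolate and a reference BCMV isolate.
isolate_cp   = "ATGGCAGGTGAAACTGTTGATGCTGGT"
reference_cp = "ATGGCTGGTGAAACGGTTGATGCAGGT"
print(round(percent_identity(isolate_cp, reference_cp), 1), "% identity")
```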

Keywords: cowpea, epidemiology, genotypes, virus

Procedia PDF Downloads 207
245 Management of the Experts in the Research Evaluation System of the University: Based on National Research University Higher School of Economics Example

Authors: Alena Nesterenko, Svetlana Petrikova

Abstract:

Research evaluation is one of the most important elements of the self-regulation and development of researchers, as it is an impartial and independent process of assessment. The method of expert evaluations, as a scientific instrument for solving complicated non-formalized problems, offers, firstly, a scientifically sound way to conduct the assessment with maximum effectiveness of work at every step and, secondly, the use of quantitative methods for evaluation, assessment of expert opinion and collective processing of the results. These two features distinguish the method of expert evaluations from the long-known expertise widespread in many areas of knowledge. Different typical problems require different types of expert evaluation methods. Several issues arise with these methods: expert selection, management of the assessment procedure, processing of the results and remuneration of the experts. To address these issues, an online system was created with the primary purpose of developing a versatile application for many workgroups with matching approaches to scientific work management. The online documentation assessment and statistics system allows: - realizing within one platform the independent activities of different workgroups (e.g. expert officers, managers); - establishing different workspaces for the corresponding workgroups, where custom user databases can be created according to particular needs; - forming the required output documents for each workgroup; - configuring information gathering for each workgroup (forms of assessment, tests, inventories); - creating and operating personal databases of remote users; - setting up automatic notification through e-mail. The next stage is the development of quantitative and qualitative criteria to form a database of experts. The inventory was designed so that experts submit not only their personal data, place of work and scientific degree but also keywords reflecting their expertise, academic interests, ORCID, Researcher ID, SPIN-code RSCI, Scopus AuthorID, knowledge of languages and primary scientific publications. For each project, competition assessments are processed in accordance with the ordering party’s demands, in the form of appraised inventories, commentaries (50-250 characters) and an overall review (1500 characters) in which the expert states the absence of a conflict of interest. Evaluation is conducted as follows: as applications are added to the database, the expert officer selects experts, generally two per application. Experts are selected according to the keywords; this method proved to work well, unlike the OECD classifier. In the last stage, the choice of experts is approved by the supervisor and e-mails are sent to the experts inviting them to assess the project. An expert supervisor monitors the experts writing their reports to ensure all formalities are in place (time frame, propriety, correspondence). If the difference in assessments exceeds four points, a third evaluation is appointed. When the expert finishes work on the expert opinion, the system shows a contract marked ‘new’; managers process the contract and the expert receives an e-mail stating that the contract has been formed and is ready to be signed. All formalities are then concluded and the expert receives remuneration for the work. The specifics of the interaction of the examination officer with other experts will be presented in the report.
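The assignment logic described above (keyword-based matching of two experts per application, with a third evaluation appointed when the two scores differ by more than four points) can be sketched as follows. The expert names, keyword sets and scores are hypothetical placeholders; only the two-expert default and the four-point rule come from the abstract.

```python
def keyword_overlap(expert_keywords, application_keywords):
    """Number of shared keywords, a simple stand-in for the system's
    keyword-based matching."""
    return len(set(expert_keywords) & set(application_keywords))

def select_experts(experts, application_keywords, n=2):
    """Rank experts by keyword overlap and take the top n (two by default)."""
    ranked = sorted(experts.items(),
                    key=lambda item: keyword_overlap(item[1], application_keywords),
                    reverse=True)
    return [name for name, _ in ranked[:n]]

def needs_third_review(score_a, score_b, threshold=4):
    """A third evaluation is appointed when the two assessments differ by
    more than the threshold (four points in the described workflow)."""
    return abs(score_a - score_b) > threshold

experts = {
    "Expert A": {"labor economics", "survey methods"},
    "Expert B": {"sociology", "survey methods", "education"},
    "Expert C": {"machine learning", "statistics"},
}
application = {"survey methods", "education"}

print("assigned:", select_experts(experts, application))
print("third review needed:", needs_third_review(7, 12))
```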

Keywords: expertise, management of research evaluation, method of expert evaluations, research evaluation

Procedia PDF Downloads 189
244 Prevalence, Antimicrobial Susceptibility Pattern and Public Health Significance for Staphylococcus Aureus of Isolated from Raw Red Meat at Butchery and Abattoir House in Mekelle, Northern Ethiopia

Authors: Haftay Abraha Tadesse

Abstract:

Background: Staphylococcus is a genus of worldwide-distributed bacteria associated with infections of different sites in humans and animals. They are among the most important causes of infection associated with the consumption of contaminated food. Objective: The objective of this study was to isolate Staphylococcus aureus from raw meat collected at butchery and abattoir houses in Mekelle, Northern Ethiopia, and to determine its antimicrobial susceptibility patterns and public health significance. Methodology: A cross-sectional study was conducted from April to October 2019. Socio-demographic data and information on public health significance were collected using a predesigned questionnaire. The raw meat samples were collected aseptically in the butchery and abattoir houses and transported in an ice box to Mekelle University, College of Veterinary Sciences, for the isolation and identification of Staphylococcus aureus. Antimicrobial susceptibility was determined by the disc diffusion method. The data obtained were cleaned and entered into STATA 22.0, and a logistic regression model with odds ratios was used to assess the association of risk factors with bacterial contamination. A P-value < 0.05 was considered statistically significant. Results: In the present study, 88 out of 250 samples (35.2%) were found to be contaminated with Staphylococcus aureus. Among the raw meat specimens, the positivity rate of Staphylococcus aureus was 37.6% (n=47) in butchery houses and 32.8% (n=41) in abattoir houses. Among the associated risk factors, not using gloves (AOR=0.222; 95% CI: 0.104-0.473), strict separation between clean and dirty areas (AOR=1.37; 95% CI: 0.66-2.86) and poor hand-washing habits (AOR=1.08; 95% CI: 0.35-3.35) were found to be associated with Staphylococcus aureus contamination. All thirty-seven Staphylococcus aureus isolates from butchery houses that were checked were (100%) sensitive to doxycycline, trimethoprim, gentamicin, sulphamethoxazole, amikacin, CN, co-trimoxazole and nitrofurantoin, whereas they showed resistance to cefotaxime (100%), ampicillin (87.5%), penicillin (75%), B (75%) and nalidixic acid (50%). On the other hand, all Staphylococcus aureus isolates from abattoir houses (100%, n=10) were sensitive to chloramphenicol, gentamicin and nitrofurantoin, whereas they showed 100% resistance to penicillin, B, AMX, ceftriaxone, ampicillin and cefotaxime. The overall multi-drug resistance for Staphylococcus aureus was 90% and 100% in butchery and abattoir houses, respectively. Conclusion: Staphylococcus aureus was recovered from 35.2% of the raw meat samples collected from the butchery and abattoir houses. More has to be done to develop hand-washing behavior and ensure the availability of safe water in the butchery houses in order to reduce the burden of bacterial contamination. The present findings highlight the need to implement protective measures against food contamination and to consider alternative drug options. The development of antimicrobial resistance is nearly always a result of repeated therapeutic and/or indiscriminate use of antimicrobials. Regular antimicrobial sensitivity testing helps to select effective antibiotics and to reduce the problem of resistance developing towards commonly used antibiotics.
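The risk-factor analysis described in the methodology (a logistic regression whose exponentiated coefficients give adjusted odds ratios) can be sketched as follows. The sketch uses statsmodels on a synthetic dataset with two invented binary predictors; the sample size, variable names and resulting figures are illustrative and do not reproduce the study's data.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data standing in for the questionnaire: 250 samples, a binary
# outcome (S. aureus contamination) and two binary risk factors.
rng = np.random.default_rng(2)
n = 250
no_gloves    = rng.integers(0, 2, n)
poor_washing = rng.integers(0, 2, n)
logit_p = -1.2 + 1.0 * no_gloves + 0.4 * poor_washing
contaminated = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([no_gloves, poor_washing]))
fit = sm.Logit(contaminated, X).fit(disp=False)

# Adjusted odds ratios and 95% confidence intervals (exponentiated coefficients).
odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())
print("AOR:", np.round(odds_ratios, 3))
print("95% CI:", np.round(conf_int, 3))
```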

Keywords: abattoir house, AMR, butchery house, S. aureus

Procedia PDF Downloads 62
243 Learning the Most Common Causes of Major Industrial Accidents and Apply Best Practices to Prevent Such Accidents

Authors: Rajender Dahiya

Abstract:

Investigation outcomes of major process incidents have been consistent for decades and confirm that the causes and consequences are often identical. The debate remains: why do we continue to experience similar process incidents despite the enormous development of new tools, technologies, industry standards, codes, regulations, and learning processes? The objective of this paper is to investigate the most common causes of major industrial incidents and to reveal industry challenges and best practices to prevent such incidents. The author, in his current role, performs audits and inspections of a variety of high-hazard industries in North America, including petroleum refineries, chemicals, petrochemicals, manufacturing, etc. In this paper, he shares real-life scenarios, examples, and case studies from high-hazard operating facilities, including key challenges and best practices. The case study provides a clear understanding of the importance of near-miss incident investigation. The incident was a safe operating limit excursion. The case describes deficiencies in management programs, the competency of employees, and the culture of the corporation, covering hazard identification and risk assessment, maintaining the integrity of safety-critical equipment, operating discipline, learning from process safety near misses, process safety competency, process safety culture, audits, and performance measurement. Failure to identify the hazards and manage the risks of highly hazardous materials and processes is one of the primary root causes of an incident, and failure to learn from past incidents is the leading cause of their recurrence. Several investigations of major incidents discovered that each showed several warning signs before occurring and, most importantly, that all were preventable. The author will discuss why preventable incidents were not prevented and review the common causes of learning failures from past major incidents. The leading causes of past incidents are summarized below. First, management failure to identify the hazard and/or mitigate the risk of hazardous processes or materials; this process starts early in the project stage and continues throughout the life cycle of the facility, and a poorly done hazard study such as a HAZID, PHA, or LOPA is one of the leading causes of this failure. If this step is performed correctly, then the next potential cause is management failure to maintain the integrity of safety-critical systems and equipment; in most incidents, the mechanical integrity of critical equipment was not maintained, and safety barriers were either bypassed, disabled, or not maintained. The third major cause is management failure to learn and/or apply learning from past incidents; there were several precursors before those incidents, and these precursors were either ignored altogether or not taken seriously. This paper will conclude by sharing how a well-implemented operating management system, a good process safety culture, and competent leaders and staff contribute to managing risks and preventing major incidents.

Keywords: incident investigation, risk management, loss prevention, process safety, accident prevention

Procedia PDF Downloads 32