Search results for: multiple subordinated modeling
731 Functional Analysis of Variants Implicated in Hearing Loss in a Cohort from Argentina: From Molecular Diagnosis to Pre-Clinical Research
Authors: Paula I. Buonfiglio, Carlos David Bruque, Lucia Salatino, Vanesa Lotersztein, Sebastián Menazzi, Paola Plazas, Ana Belén Elgoyhen, Viviana Dalamón
Abstract:
Hearing loss (HL) is the most prevalent sensorineural disorder, affecting about 10% of the global population, with more than half of cases due to genetic causes. About 1 in 500-1000 newborns presents with congenital HL. Most of the patients are non-syndromic with an autosomal recessive mode of inheritance. To date, more than 100 genes are related to HL. Therefore, whole-exome sequencing (WES) has become a cost-effective alternative approach for molecular diagnosis. Nevertheless, new challenges arise from the detection of novel variants, in particular missense changes, for which the genotype-to-phenotype correlation is not always straightforward. In this work, we aimed to identify the genetic causes of HL in isolated and familial cases by designing a multistep approach to analyze target genes related to hearing impairment. Moreover, we performed in silico and in vivo analyses in order to further study the effect of some of the novel variants identified on hair cell function using the zebrafish model. A total of 650 patients were studied by Sanger sequencing and Gap-PCR in the GJB2 and GJB6 genes, respectively, diagnosing 15.5% of sporadic cases and 36% of familial ones. Overall, 50 different sequence variants were detected. Fifty of the undiagnosed patients with moderate HL were tested for deletions in the STRC gene by multiplex ligation-dependent probe amplification (MLPA), leading to diagnosis in 6% of them. After this initial screening, 50 families were selected to be analyzed by WES, achieving diagnosis in 44% of them. Half of the identified variants were novel. A missense variant in the MYO6 gene detected in a family with postlingual HL was selected to be further analyzed. Protein modeling with the AlphaFold2 software was performed, supporting its predicted pathogenic effect. In order to functionally validate this novel variant, a knockdown phenotype rescue assay in zebrafish was carried out. Injection of wild-type MYO6 mRNA in embryos rescued the phenotype, whereas using the mutant MYO6 mRNA (carrying the c.2782C>A variant) had no effect. These results strongly suggest a deleterious effect of this variant on the mobility of stereocilia in zebrafish neuromasts, and hence on the auditory system. In the present work, we demonstrated that our algorithm is suitable for a sequential multigenic approach to HL in our cohort. These results highlight the importance of a combined strategy to identify candidate variants, as well as of in silico and in vivo studies to analyze and prove their pathogenicity, in order to accomplish a better understanding of the mechanisms underlying the pathophysiology of hearing impairment.
Keywords: diagnosis, genetics, hearing loss, in silico analysis, in vivo analysis, WES, zebrafish
Procedia PDF Downloads 95
730 Suicide Wrongful Death: Standard of Care Problems Involving the Inaccurate Discernment of Lethal Risk When Focusing on the Elicitation of Suicide Ideation
Authors: Bill D. Geis
Abstract:
Suicide wrongful death forensic cases are the fastest rising tort in mental health law. It is estimated that suicide-related cases have accounted for 15% of U.S. malpractice claims since 2006. Most suicide-related personal injury claims fall into the legal category of “wrongful death.” Though mental health experts may be called on to address a range of forensic questions in wrongful death cases, the central consultation that most experts provide is about the negligence element, specifically the issue of whether the clinician met the clinical standard of care in assessing, treating, and managing the deceased person’s mental health care. Standards of care, varying from U.S. state to state, are broad and address what a reasonable clinician might do in a similar circumstance. This fact leaves it up to forensic experts, in each case, to put forth a reasoned estimate of what the standard of care should have been in the specific case under litigation. Because the general state guidelines for standard of care are broad, forensic experts are readily retained to provide scientific and clinical opinions about whether or not a clinician met the standard of care in their suicide assessment, treatment, and management of the case. In the past and in much of current practice, the assessment of suicide has centered on the elicitation of verbalized suicide ideation. Research in recent years, however, has indicated that the majority of persons who end their lives do not say they are suicidal at their last medical or psychiatric contact. Near-term risk assessment that goes beyond verbalized suicide ideation is needed. Our previous research employed structural equation modeling to predict lethal suicide risk: eight negative thought patterns (feeling like a burden on others, hopelessness, self-hatred, etc.), mediated by nine transdiagnostic clinical factors (mental torment, insomnia, substance abuse, PTSD intrusions, etc.), were combined to predict acute lethal suicide risk. This structural equation model, the Lethal Suicide Risk Pattern (LSRP) Acute model, had excellent goodness-of-fit [χ²(47) = 94.25, p < .001, CFI = .98, RMSEA = .05, 90% CI = .03-.06, p(RMSEA ≤ .05) = .63, AIC = 340.25]. A further SEM analysis was completed for this paper, adding a measure of acute suicide ideation to the previous model. Acceptable prediction model fit was no longer achieved [χ²/df = 3.571, CFI = .953, RMSEA = .075, 90% CI = .065-.085, AIC = 529.550]. This finding suggests that, in this additional study, immediate verbalized suicide ideation information was unhelpful in the assessment of lethal risk. The LSRP and other dynamic, near-term risk models (such as the Acute Suicide Affective Disorder Model and the Suicide Crisis Syndrome Model), which go beyond elicited suicide ideation, need to be incorporated into current clinical suicide assessment training. Without this training, the standard of care for suicide assessment is out of sync with current research, an emerging dilemma for the forensic evaluation of suicide wrongful death cases.
Keywords: forensic evaluation, standard of care, suicide, suicide assessment, wrongful death
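As an illustration of how fit indices of the kind quoted above relate to the underlying chi-square statistics, the short sketch below computes RMSEA, CFI and an AIC-style value from model and baseline chi-squares. This is not the authors' code; the baseline chi-square, sample size and parameter count are invented placeholders, and the AIC expression shown is only one common formulation.

```python
# Illustrative sketch (not the study's analysis): relating SEM fit indices to chi-square values.
import math

def fit_indices(chi2, df, chi2_base, df_base, n_obs, n_params):
    """Compute common SEM fit indices from model and baseline (independence) chi-squares."""
    # RMSEA: root mean square error of approximation
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n_obs - 1)))
    # CFI: comparative fit index relative to the baseline model
    cfi = 1.0 - max(chi2 - df, 0.0) / max(chi2_base - df_base, chi2 - df, 1e-12)
    # One common AIC formulation: chi-square penalised by the number of free parameters
    aic = chi2 + 2.0 * n_params
    return rmsea, cfi, aic

# Hypothetical inputs loosely echoing the reported LSRP Acute model values
rmsea, cfi, aic = fit_indices(chi2=94.25, df=47, chi2_base=2500.0,
                              df_base=66, n_obs=500, n_params=123)
print(f"RMSEA={rmsea:.3f}, CFI={cfi:.3f}, AIC={aic:.2f}")
```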
Procedia PDF Downloads 69
729 Caribbean Universities and the Global Educational Market: An Examination of Entrepreneurship and Leadership in an Era of Change
Authors: Paulette Henry
Abstract:
If Caribbean universities wish to remain sustainable in the global education market, they must meet the new demands of 21st-century learners. This means preparing the teaching and learning environment with the human and material resources needed so that the university can develop into an entrepreneurial university. The entrepreneurial university prepares the learner to become a global citizen, one who is innovative and a critical thinker and has the competencies to create jobs. Entrepreneurship education provides more equitable access to university education, building capacity for the local and global economy. Entrepreneurial thinking, the mindset, must therefore be present among academic and support staff as well as students. In developing countries where resources are scarce, universities are grappling with a myriad of financial and non-financial issues. These include increasing costs, union demands for increased staff remuneration, and reduced subvention from governments, which has become the norm. In addition, there is political pressure against increasing tuition fees, alongside perceptions about the moral responsibilities of universities in national development. The question is how small universities carve out their niche, meet both political and consumer demands for a high-quality, low-cost education, fulfil their development mandate, and still remain not only viable but competitive. Themes which are central to this discourse on the transitions necessary for the entrepreneurial university are leadership, governance and staff well-being. This paper therefore presents a case study of a Caribbean university to show how transformational leadership and the change management framework propel change towards an entrepreneurial institution seeking a competitive advantage despite its low-resourced context. Important to this discourse are the transformational approaches used by the university to prepare staff to move from their traditional psyche to embracing an entrepreneurial mindset, whilst equipping students in the same mode to become work-ready and creative global citizens. Using a mixed-methods approach, opinions were garnered from members of the university community as well as external stakeholder groups on their perception of the role of the university in the business arena and as a primary stakeholder in national development. One of the critical concepts emanating from the discourse was the need to change the mindset of those in university governance as well as how national stakeholders engage the university. This paper shows how multiple non-financial factors can contribute to change. A combination of transformational and servant leadership, strengthening institutional structures and developing new ones, and rebuilding institutional trust and pride have been among the strategies employed within the change management framework. The university is no longer limited by borders but, through international linkages, has transcended into a transnational stakeholder.
Keywords: competitiveness, context, entrepreneurial, leadership
Procedia PDF Downloads 210
728 Improvement of the Traditional Techniques of Artistic Casting through the Development of Open Source 3D Printing Technologies Based on Digital Ultraviolet Light Processing
Authors: Drago Diaz Aleman, Jose Luis Saorin Perez, Cecile Meier, Itahisa Perez Conesa, Jorge De La Torre Cantero
Abstract:
Traditional manufacturing techniques used in artistic contexts compete with highly productive and efficient industrial procedures. Craft techniques and the associated business models tend to disappear under the pressure of mass-produced products that compete in all niche markets, including those traditionally reserved for the work of art. The surplus value derived from the prestige of the author, the exclusivity of the product or the mastery of the artist does not seem to be a sufficient reason to preserve this productive model. In recent years, the adoption of open-source digital manufacturing technologies in small art workshops has favored their permanence by providing great advantages such as easy accessibility, low cost, and free modification, adapting to the specific needs of each workshop. It is possible to use pieces modeled by computer and made with FDM (Fused Deposition Modeling) 3D printers that use PLA (polylactic acid) in artistic casting procedures. Models printed in PLA are limited to approximate minimum sizes of 3 cm, and the optimal layer height resolution is 0.1 mm. Due to these limitations, it is not the most suitable technology for artistic casting of smaller pieces. One alternative that overcomes the size limitation is selective laser sintering (SLS) printers. Another possibility is a process in which a laser hardens metal powder layer by layer, called DMLS (Direct Metal Laser Sintering). However, due to its high cost, it is a technology that is difficult to introduce in small artistic foundries. Low-cost DLP (Digital Light Processing) printers can offer high resolution for a reasonable cost (around 0.02 mm on the Z axis and 0.04 mm on the X and Y axes) and can print models with castable resins that allow subsequent direct artistic casting in precious metals or adaptation to processes such as electroforming. In this work, the design of a DLP 3D printer is detailed, using backlit LCD screens with ultraviolet light. Its development is totally open source, and it is proposed as a kit made up of electronic components based on Arduino and mechanical components that are easy to access on the market. The CAD files of its components can be manufactured on low-cost FDM 3D printers. The result is a printer costing less than 500 Euros, with high resolution and an open design with free access that allows not only its manufacture but also its improvement. In future works, we intend to carry out different comparative analyses, which will allow us to accurately estimate the print quality, as well as the real cost of the artistic works made with it.
Keywords: traditional artistic techniques, DLP 3D printer, artistic casting, electroforming
Procedia PDF Downloads 142
727 Bioresorbable Medicament-Eluting Grommet Tube for Otitis Media with Effusion
Authors: Chee Wee Gan, Anthony Herr Cheun Ng, Yee Shan Wong, Subbu Venkatraman, Lynne Hsueh Yee Lim
Abstract:
Otitis media with effusion (OME) is the leading cause of hearing loss in children worldwide. Surgery to insert a grommet tube into the eardrum is usually indicated for OME unresponsive to antimicrobial therapy. It is the most common surgery for children. However, current commercially available grommet tubes are non-bioresorbable, not drug-treated, and have an unpredictable duration of retention on the eardrum to ventilate the middle ear. Their functionality is impaired when clogged or chronically infected, requiring additional surgery to remove/reinsert grommet tubes. We envisaged that a novel fully bioresorbable grommet tube with sustained antibiotic release technology could address these drawbacks. In this study, drug-loaded bioresorbable poly(L-lactide-co-ε-caprolactone) (PLC) copolymer grommet tubes were fabricated by a microinjection moulding technique. In vitro drug release and a degradation model of the PLC tubes were studied. Antibacterial properties were evaluated by incubating PLC tubes with P. aeruginosa broth. Surface morphology was analyzed using scanning electron microscopy. A preliminary animal study was conducted using guinea pigs as an in vivo model to evaluate PLC tubes with and without drug, with a commercial Mini Shah grommet tube as comparison. Our in vitro data showed sustained drug release over 3 months. All PLC tubes revealed exponential degradation profiles over time. Modeling predicted loss of tube functionality in water to be approximately 14 weeks and 17 weeks for PLC with and without drug, respectively. Generally, PLC tubes had less bacterial adherence, which was attributed to the much smoother tube surfaces compared to the Mini Shah. Antibiotic released from the PLC tube further made bacterial adherence on the surface negligible. The tubes showed neither inflammation nor otorrhea at 18 weeks post-insertion in the eardrums of guinea pigs, but demonstrated a severe degree of bioresorption. Histology confirmed the new PLC tubes were biocompatible. Analyses of the PLC tubes in the eardrums showed bioresorption profiles close to our in vitro degradation models. The bioresorbable antibiotic-loaded grommet tubes showed good predictability in functionality. The smooth surface and sustained release technology reduced the risk of tube infection. A tube functional duration of 18 weeks allows a sufficient ventilation period to treat OME. Our ongoing studies include modifying the surface properties with protein coating, optimizing the drug dosage in the tubes to enhance their performance, evaluating their functional outcome on hearing after full resorption of the grommet tube and healing of the eardrums, and developing an animal model with OME to further validate our in vitro models.
Keywords: bioresorbable polymer, drug release, grommet tube, guinea pigs, otitis media with effusion
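The exponential degradation profiles and the predicted 14-17 week loss of functionality suggest a simple first-order fit; the sketch below shows how such a prediction could be made. It is a minimal illustration with invented data points, not the study's actual analysis.

```python
# A minimal sketch (not the study's data or code) of fitting an exponential
# degradation profile and extrapolating the time to a functional threshold.
import numpy as np
from scipy.optimize import curve_fit

weeks = np.array([0, 2, 4, 8, 12, 16, 18], dtype=float)               # time in vitro (placeholder)
mass_retained = np.array([100, 92, 83, 68, 55, 44, 40], dtype=float)  # % of initial mass (placeholder)

def exp_decay(t, m0, k):
    """Simple first-order (exponential) degradation model."""
    return m0 * np.exp(-k * t)

popt, pcov = curve_fit(exp_decay, weeks, mass_retained, p0=[100.0, 0.05])
m0_fit, k_fit = popt

# Predict when the tube would drop below an assumed functional threshold, e.g. 50 % mass
threshold = 50.0
t_fail = np.log(m0_fit / threshold) / k_fit
print(f"fitted rate constant k = {k_fit:.3f} / week, predicted loss of function at ~{t_fail:.1f} weeks")
```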
Procedia PDF Downloads 451
726 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley
Abstract:
Structural integrity for cladding products is a key performance parameter, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but can give little information about critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem, as it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, to the FEA solver in a series of iterations. In light of our recent work with Tata Steel U.K. using a two-way coupling methodology to determine the fire performance, it has been shown that a program called FDS-2-Abaqus can make predictions of a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous works demonstrated the limitations of the current version of the program, the main limitation being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modeling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact factors of the new implementations and discussing the reasonableness of scaling up further to a whole warehouse.
Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids
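As a rough illustration of the reduced order modeling idea mentioned above, the sketch below performs a proper orthogonal decomposition (POD) of synthetic thermal snapshots, one common ROM construction. It is not the FDS-2-Abaqus implementation; the matrix sizes, snapshot data and energy threshold are placeholders.

```python
# A minimal POD-based ROM sketch: compress many nodal temperature snapshots into a
# handful of modes so that a field can be represented by a few coefficients.
import numpy as np

n_nodes, n_snapshots = 5000, 40                       # mesh nodes x time steps (placeholder sizes)
rng = np.random.default_rng(0)
snapshots = rng.normal(size=(n_nodes, n_snapshots))   # stand-in for CFD/FEA temperature fields

# POD via singular value decomposition of the mean-centred snapshot matrix
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

# Keep enough modes to capture, say, 99 % of the snapshot "energy"
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]                                      # reduced basis, n_nodes x r

# A field is projected onto r coefficients and reconstructed cheaply from them
coeffs = basis.T @ (snapshots[:, [0]] - mean_field)
reconstruction = mean_field + basis @ coeffs
print(f"kept {r} of {n_snapshots} modes")
```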
Procedia PDF Downloads 79
725 Development of the Family Capacity of Management of Patients with Autism Spectrum Disorder Diagnosis
Authors: Marcio Emilio Dos Santos, Kelly C. F. Dos Santos
Abstract:
Caregivers of patients diagnosed with ASD are subjected to high-stress situations due to the complexity and multiple levels of daily activities, which require the organization of events, behaviors and socioemotional situations, such as immediate decision-making, including in public spaces. The cognitive and emotional capacity needed to fulfill this caregiving role exceeds the regular cultural preparation that adults receive for conjugal and parental life. Therefore, in many cases, caregivers present a high level of overload and a poor capacity to organize and mediate the development process of the child or patient under their care. Aims: Improvement in the cognitive and emotional capacities related to the caregiver function, allowing the reduction of the overload, the feeling of incompetence and the characteristic level of stress, and developing more organized conduct and decision-making oriented towards the objectives and procedural gains necessary for the integral development of the patient with a diagnosis of ASD. Method: The study was performed with 20 relatives, randomly selected from a total of 140 patients attended. The family members were administered the Wechsler Adult Intelligence Scale III and the Family Assessment Management Measure (FaMM) questionnaire as a previous evaluation. Therapeutic activity was carried out in a small group of family members or caregivers, with weekly frequency and a minimum workload of two hours, using the Feuerstein Instrumental Enrichment cognitive development program for ten months. The previous tests were then reapplied to verify the gains obtained. Results and Discussion: There was a change in the level of caregiver overload, an improvement in the results of the Family Assessment Management Measure and, notably, an increase in performance in the cognitive aspects related to problem solving, planned behavior and management of behavioral crises. These results lead to a discussion of the need to invest in the integrated care of patients and their caregivers, mainly by enabling caregivers cognitively to deal with the complexity of autism. This goes beyond simple therapeutic orientation about adjustments in family and school routines. The study showed that when the caregiver improves his/her capacity of management, the results of the treatment are potentiated and there is a reduction in the level of the caregiver's overload. Importantly, the study was performed for only ten months, and the number of family members attended in the study (n = 20) needs to be expanded to have statistical strength.
Keywords: caregiver overload, cognitive development program ASD caregivers, feuerstein instrumental enrichment, family assessment management measure
Procedia PDF Downloads 130
724 Effect of Ageing of Laser-Treated Surfaces on Corrosion Resistance of Fusion-bonded Al Joints
Authors: Rio Hirakawa, Christian Gundlach, Sven Hartwig
Abstract:
Aluminium has been used in a wide range of industrial applications due to its numerous advantages, including excellent specific strength, thermal conductivity, corrosion resistance, workability and recyclability. The automotive industry is increasingly adopting multi-material designs, including aluminium, in structures and components to improve the mechanical usability and performance of individual components. A common method for assembling dissimilar materials is mechanical joining, but mechanical joining requires multiple manufacturing steps, affects the mechanical properties of the base material and increases the weight due to additional metal parts. Fusion bonding is being used in more and more industries as a way of avoiding the above drawbacks. In fusion bonding, surface pre-treatment of the base material is essential to ensure the long-term durability of the joint. Laser surface treatment of aluminium has been shown to improve the durability of the joint by forming a passive oxide film and roughening the substrate surface. In fusion bonding, the polymer bonds directly to the metal instead of through an adhesive, but the sensitivity to interfacial contamination is higher due to the chemical activity and molecular size of the polymer. Laser-treated surfaces are expected to absorb impurities from the storage atmosphere over time, but the effect of such changes in the treated surface over time on the durability of fusion-bonded joints has not yet been fully investigated. In this paper, the effect of the ageing of laser-treated surfaces of aluminium alloys on the corrosion resistance of fusion-bonded joints is therefore investigated. AlMg3 of 1.5 mm thickness was cut using a water-jet cutting machine, cleaned and degreased with isopropanol, and surface pre-treated with a pulsed fiber laser at a wavelength of 1060 nm, a maximum power of 70 W and a repetition rate of 55 kHz. The aluminium surfaces were then stored in air for various periods of time, and their corrosion resistance was assessed by cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). For the aluminium joints, induction heating was employed as the fusion bonding method, and single-lap shear specimens were prepared. The corrosion resistance of the joints was assessed by measuring the lap shear strength before and after neutral salt spray. Cross-sectional observations by scanning electron microscopy (SEM) were also carried out to investigate changes in the microstructure of the bonded interface. Finally, the corrosion resistance of the surface and of the joint were compared, and the differences in the mechanisms of corrosion resistance enhancement between the two were discussed.
Keywords: laser surface treatment, pre-treatment, bonding, corrosion, durability, interface, automotive, aluminium alloys, joint, fusion bonding
Procedia PDF Downloads 79
723 Growing Pains and Organizational Development in Growing Enterprises: Conceptual Model and Its Empirical Examination
Authors: Maciej Czarnecki
Abstract:
Even though growth is one of the most important strategic objectives for many enterprises, we know relatively little about this phenomenon. This research contributes to broadening our knowledge of the managerial consequences of growth. Scales for measuring organizational development and growing pains were developed. A conceptual model of the connections among growth, organizational development, growing pains, selected development factors and financial performance was examined. The research process comprised a literature review, 20 interviews with managers, an examination of 12 raters’ opinions, pilot research, and a 7-point Likert scale questionnaire administered to 138 Polish enterprises employing 50-249 people which had increased their employment by at least 50% within the last three years. Factor analysis, the Pearson product-moment correlation coefficient, Student’s t-test and the chi-squared test were used to develop the scales. High Cronbach’s alpha coefficients were obtained. The verification of correlations among the constructs was carried out with factor correlations, multiple regressions and path analysis. When an enterprise grows, it is necessary to implement changes in its structure, management practices etc. (organizational development) to meet the challenges of growing complexity. In this paper, organizational development was defined as internal changes aiming to improve the quality of existing elements, or to introduce new ones, in the areas of processes, organizational structure and culture, and operational and management systems. Thus, H1: Growth has positive effects on organizational development. The main thesis of the research is that if organizational development does not catch up with the growing complexity of a growing enterprise, growing pains will arise (lower work comfort, conflicts, lack of control etc.). They will exert a negative influence on financial performance and may result in serious organizational crisis or even bankruptcy. Thus, H2: Growth has positive effects on growing pains; H3: Organizational development has negative effects on growing pains; H4: Growing pains have negative effects on financial performance; H5: Organizational development has positive effects on financial performance. Scholars have considered long lists of factors with a potential influence on organizational development. The development of a comprehensive model taking into account all possible variables may be beyond the capacity of any researcher or even the statistical software used. After the literature review, it was decided to increase the level of abstraction and to include the following constructs in the conceptual model: organizational learning (OL), positive organization (PO) and high performance factors (HPF). H1a/b/c: OL/PO/HPF has a positive effect on organizational development; H2a/b/c: OL/PO/HPF has a negative effect on growing pains. The results of hypothesis testing: H1: partly supported; H1a/b/c: supported/not supported/supported; H2: not supported; H2a/b/c: not supported/partly supported/not supported; H3: supported; H4: partly supported; H5: supported. The research seems to be of great value for both scholars and practitioners. It proved that OL and HPF matter for organizational development. Scales for measuring organizational development and growing pains were developed. Its main finding, though, is that organizational development is a good way of improving financial performance.
Keywords: organizational development, growth, growing pains, financial performance
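For readers unfamiliar with the scale-development step, the sketch below shows how a Cronbach's alpha coefficient of the kind reported above is computed. The item responses are simulated placeholders, not the Polish enterprise data.

```python
# A minimal illustration of Cronbach's alpha for a multi-item scale on a 7-point Likert format.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical responses: 138 firms x 6 items of an organizational-development scale
rng = np.random.default_rng(1)
latent = rng.normal(4, 1, size=(138, 1))                               # shared underlying construct
scores = np.clip(np.rint(latent + rng.normal(0, 0.7, size=(138, 6))), 1, 7)
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```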
Procedia PDF Downloads 220
722 Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety
Authors: Neeti Nayak, Khalid Duri
Abstract:
Despite the many advances made in transportation planning, the number of injuries and fatalities in the United States which involve motorized vehicles near intersections remains largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicate that accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections), the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate applications of traffic controls. Intersections that were initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments. Many accidents involving pedestrians occur at locations which should have been designed with safe crosswalks. Conventional solutions for evaluating intersection safety often require the costly deployment of engineering surveys and analysis, which limits the capacity of resource-constrained administrations to adequately satisfy their community’s needs for safe roadways, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions: the presence of traffic controls, zoning records, locations of interest for human activity, design speed of roadways, topographic details and immovable structures. The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce the risks of accidents using 2-dimensional data representing multi-modal street networks, parcels, crosswalks and demographic information, alongside 3-dimensional models of buildings, elevation, slope and aspect surfaces, to evaluate visibility and lighting conditions and estimate probabilities for jaywalking and risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities which conform to similar transportation standards, given the availability of relevant GIS data.
Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design
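One elementary building block of such a GIS workflow is a sight-line obstruction test. The hedged sketch below illustrates it in 2D with invented coordinates; the actual methodology described above works with 3D surfaces and richer data layers.

```python
# A simplified 2D sight-line check: does a building footprint block the view from an
# approaching vehicle to a crosswalk corner? All geometries are hypothetical.
from shapely.geometry import LineString, Polygon

# Hypothetical features in projected coordinates (metres)
vehicle_stop_bar = (0.0, -15.0)                                  # approaching vehicle position
crosswalk_corner = (12.0, 3.0)                                   # far corner of the crosswalk
building = Polygon([(4, -10), (10, -10), (10, -2), (4, -2)])     # corner building footprint

sight_line = LineString([vehicle_stop_bar, crosswalk_corner])

if sight_line.intersects(building):
    print("Sight line obstructed: flag this intersection approach as higher risk")
else:
    print("Clear sight line for this approach")
```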
Procedia PDF Downloads 110
721 Performance Validation of Model Predictive Control for Electrical Power Converters of a Grid Integrated Oscillating Water Column
Authors: G. Rajapakse, S. Jayasinghe, A. Fleming
Abstract:
This paper aims to experimentally validate the control strategy used for the electrical power converters in a grid-integrated oscillating water column (OWC) wave energy converter (WEC). The particular OWC’s unidirectional air turbine-generator output power results in discrete large power pulses. Therefore, the system requires power conditioning prior to integration into the grid. This is achieved by using a back-to-back power converter with an energy storage system. A Li-ion battery energy storage is connected to the dc-link of the back-to-back converter using a bidirectional dc-dc converter. This arrangement decouples the system dynamics and mitigates the mismatch between supply and demand powers. All three electrical power converters used in the arrangement are controlled using a finite control set-model predictive control (FCS-MPC) strategy. The rectifier controller regulates the turbine at a set rotational speed to keep the air turbine within a desirable speed range under varying wave conditions. The inverter controller maintains the output power to the grid in adherence to grid codes. The bidirectional dc-dc converter controller sets the dc-link voltage at its reference value. The software modeling of the OWC system and the FCS-MPC is carried out in MATLAB/Simulink using actual data and parameters obtained from a prototype unidirectional air-turbine OWC developed at the Australian Maritime College (AMC). The hardware development and experimental validations are being carried out at the AMC Electronics laboratory. The designed FCS-MPC algorithms for the power converters are separately coded in Code Composer Studio V8 and downloaded into separate Texas Instruments TIVA C Series EK-TM4C123GXL LaunchPad evaluation boards with TM4C123GH6PMI microcontrollers (real-time control processors). Each microcontroller is used to drive a 2 kW 3-phase STEVAL-IHM028V2 evaluation board with an intelligent power module (STGIPS20C60). A Delta standard (ASDA-B2 series) servo drive/motor coupled to a 2 kW permanent magnet synchronous generator serves as the turbine-generator. This lab-scale setup is used to obtain experimental results. The validation of the FCS-MPC is done by comparing these experimental results to the results obtained from the MATLAB/Simulink simulations in similar scenarios. The results show that under the proposed control scheme, the regulated variables follow their references accurately. This research confirms that FCS-MPC fits well into the power converter control of the OWC-WEC system with a Li-ion battery energy storage.
Keywords: dc-dc bidirectional converter, finite control set-model predictive control, Li-ion battery energy storage, oscillating water column, wave energy converter
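To make the FCS-MPC principle concrete, the sketch below shows a generic one-step-ahead finite control set selection for a two-level inverter feeding a simple R-L load: every admissible switching state is tried in a discrete-time prediction and the one with the lowest cost is applied. The plant model, parameter values and cost function are simplified assumptions, not the OWC converter models used in the study.

```python
# Conceptual FCS-MPC step for a two-level, three-phase inverter with an R-L load.
import numpy as np

Ts, R, L = 1e-4, 0.5, 10e-3          # sampling time (s), load resistance (ohm), inductance (H) - assumed
Vdc = 400.0                          # dc-link voltage (V) - assumed

# Finite control set: 8 switching states -> alpha/beta voltage vectors
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def alpha_beta(state):
    a, b, c = state
    v_alpha = Vdc * (2 * a - b - c) / 3.0
    v_beta = Vdc * (b - c) / np.sqrt(3.0)
    return np.array([v_alpha, v_beta])

def predict_current(i_now, v_vec):
    """Forward-Euler one-step prediction of the load current."""
    return i_now + Ts / L * (v_vec - R * i_now)

def fcs_mpc_step(i_now, i_ref):
    costs = []
    for s in states:
        i_pred = predict_current(i_now, alpha_beta(s))
        costs.append(np.sum(np.abs(i_ref - i_pred)))   # simple current-tracking cost
    return states[int(np.argmin(costs))]

best_state = fcs_mpc_step(i_now=np.array([2.0, -1.0]), i_ref=np.array([5.0, 0.0]))
print("switching state to apply:", best_state)
```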
Procedia PDF Downloads 114
720 Enhancing Mental Health Services Through Strategic Planning: The East Tennessee State University Counseling Center’s 2024-2028 Plan
Authors: R. M. Kilonzo, S. Bedingfield, K. Smith, K. Hudgins Smith, K. Couper, R. Ratley, Z. Taylor, A. Engelman, M. Renne
Abstract:
Introduction: The mental health needs of university students continue to evolve, necessitating a strategic approach to service delivery. The East Tennessee State University (ETSU) Counseling Center developed its inaugural Strategic Plan (2024-2028) to enhance student mental health services. The plan focuses on improving access, quality of care, and service visibility, aligning with the university’s mission to support academic success and student well-being. Aim: This strategic plan aims to establish a comprehensive framework for delivering high-quality, evidence-based mental health services to ETSU students, addressing current challenges and anticipating future needs. Methods: The development of the strategic plan was a collaborative effort involving the Counseling Center’s leadership and staff, with technical support from a Doctor of Public Health (community and behavioral health) intern. Multiple workshops, online/offline reviews, and stakeholder consultations were held to ensure a robust and inclusive process. A SWOT analysis and stakeholder mapping were conducted to identify strengths, weaknesses, opportunities, and challenges. Key performance indicators (KPIs) were set to measure service utilization, satisfaction, and outcomes. Results: The plan resulted in four strategic priorities: service application, visibility/accessibility, safety and satisfaction, and training programs. Key objectives include expanding counseling services, improving service access through outreach, reducing stigma, and increasing peer support programs. The plan also focuses on continuous quality improvement through data-driven assessments and research initiatives. Immediate outcomes include expanded group therapy, enhanced staff training, and increased mental health literacy across campus. Conclusion and Recommendation: The strategic plan provides a roadmap for addressing the mental health needs of ETSU students, with a clear focus on accessibility, inclusivity, and evidence-based practices. Implementing the plan will strengthen the Counseling Center’s capacity to meet the diverse needs of the student population. To ensure sustainability, it is recommended that the center continuously assess student needs, foster partnerships with university and external stakeholders, and advocate for increased funding to expand services and staff capacity.
Keywords: strategic plan, university counseling center, mental health, students
Procedia PDF Downloads 21
719 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data
Authors: Sara Bonetti
Abstract:
The use of quantitative data to support policies and justify investments has become imperative in many fields, including the field of education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy who use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed at analyzing these experiences in accessing, using, and producing quantitative data. The study utilized semi-structured interviews to capture differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a state department, included to allow for a broader perspective at the systems level. The study followed Núñez’s multilevel model of intersectionality. The key to Núñez’s model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of different influences at individual, organizational, and system levels that might intersect and affect ECE professionals’ experiences with quantitative data. At the individual level, an important element identified was the participants’ educational background, as it was possible to observe a relationship between that background and their positionality, both with respect to working with data and with respect to their power within an organization and at the policy table. For example, those with a background in child development were aware of how their formal education had failed to train them in the skills that are necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants’ position within the organization, their organization’s position with respect to others, and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as shaping what data is collected and available. These differences were reflected in the interviewees’ perceptions of and expectations for the ECE workforce. For example, one of the interviewees pointed out that many ECE professionals happen to use data out of the necessity of the moment. This lack of intentionality is a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students’ training in working with data. In conclusion, Núñez’s model helped in understanding the different elements that affect ECE professionals’ experiences with quantitative data. In particular, what was clear is that these professionals are not being provided with the necessary support and that we are not being intentional in creating data literacy skills for them, despite what is asked of them and their work.
Keywords: data literacy, early childhood professionals, intersectionality, quantitative data
Procedia PDF Downloads 254
718 Industrial Waste Multi-Metal Ion Exchange
Authors: Thomas S. Abia II
Abstract:
Intel Chandler Site has internally developed its first-of-kind (FOK) facility-scale wastewater treatment system to achieve multi-metal ion exchange. The process was carried out using a serial process train of carbon filtration, pH/ORP adjustment, and cationic exchange purification to treat dilute metal wastewater (DMW) discharged from a substrate packaging factory. Spanning a trial period of 10 months, a total of 3,271 samples were collected and statistically analyzed (average baseline ± standard deviation) to evaluate the performance of a 95-gpm, multi-reactor continuous copper ion exchange treatment system that was subsequently retrofitted for manganese ion exchange to meet environmental regulations. The system is also equipped with an inline acid and hot caustic regeneration system to rejuvenate exhausted IX resins and occasionally remove surface crud. Data generated from lab-scale studies were translated into system operating modifications following multiple trial-and-error experiments. Despite the DMW treatment system failing to meet internal performance specifications for manganese output, it was observed to remove the cation notwithstanding the prevalence of copper in the waste stream. Accordingly, the average manganese output declined from 6.5 ± 5.6 mg·L⁻¹ at pre-pilot to 1.1 ± 1.2 mg·L⁻¹ post-pilot (83% baseline reduction). This milestone was achieved despite the average influent manganese to DMW increasing from 1.0 ± 13.7 mg·L⁻¹ at pre-pilot to 2.1 ± 0.2 mg·L⁻¹ post-pilot (110% baseline uptick). Likewise, the pre-trial and post-trial average influent copper values to DMW were 22.4 ± 10.2 mg·L⁻¹ and 32.1 ± 39.1 mg·L⁻¹, respectively (43% baseline increase). As a result, the pre-trial and post-trial average copper output values were 0.1 ± 0.5 mg·L⁻¹ and 0.4 ± 1.2 mg·L⁻¹, respectively (300% baseline uptick). Conclusively, the operating pH range upstream of treatment (between 3.5 and 5) was shown to be the largest single point of influence for optimizing manganese uptake during multi-metal ion exchange. The paper herein intends to discuss the operating parameters, such as pH and oxidation-reduction potential (ORP), that were shown to significantly influence the functional versatility of the ion exchange system. It also discusses limitations of the treatment system, such as influent copper-to-manganese ratio variations, operational configuration, waste by-product management, and system recovery requirements, to provide a balanced assessment of the multi-metal ion exchange process. The take-away from this work is intended to inform the overall feasibility of ion exchange for metals manufacturing facilities that lack the capability to expand hardware due to real estate restrictions, aggressive schedules, or budgetary constraints.
Keywords: copper, industrial wastewater treatment, multi-metal ion exchange, manganese
Procedia PDF Downloads 143
717 Re-Evaluation of Field X Located in Northern Lake Albert Basin to Refine the Structural Interpretation
Authors: Calorine Twebaze, Jesca Balinga
Abstract:
Field X is located on the eastern shores of Lake Albert, Uganda, on the rift flank, where the gross sedimentary fill is typically less than 2,000 m. The field was discovered in 2006 and encountered about 20.4 m of net pay across three (3) stratigraphic intervals within the discovery well. The field covers an area of 3 km², with the structural configuration comprising a 3-way dip-closed hanging wall anticline that seals against the basement to the southeast along the bounding fault. Field X had been mapped on reprocessed 3D seismic data, which was originally acquired in 2007 and reprocessed in 2013. The seismic data quality is good across the field, and the reprocessing work reduced the uncertainty in the location of the bounding fault and enhanced the lateral continuity of reservoir reflectors. The current study was a re-evaluation of Field X to refine the fault interpretation and understand the structural uncertainties associated with the field. The seismic data and three (3) well datasets were used during the study. The evaluation followed standard workflows using Petrel software and structural attribute analysis. The process spanned seismic-to-well tie, structural interpretation, and structural uncertainty analysis. Analysis of the well ties generated for the 3 wells provided a geophysical interpretation that was consistent with geological picks. The generated time-depth curves showed a general increase in velocity with burial depth; however, the separation in curve trends observed below 1,100 m was mainly attributed to minimal lateral variation in velocity between the wells. In addition to attribute analysis, three velocity modeling approaches were evaluated: the time-depth curve, Vo + kZ, and average velocity methods. The generated models were calibrated at well locations using well tops to obtain the best velocity model for Field X. The time-depth method resulted in more reliable depth surfaces, with good structural coherence between the TWT and depth maps and a minimal error at well locations of 2 to 5 m. Both the NNE-SSW rift border fault and the minor faults in the existing interpretation were re-evaluated. The new interpretation additionally delineated an E-W trending fault in the northern part of the field that had not been interpreted before. The fault was interpreted at all stratigraphic levels and thus propagates from the basement to the surface and is an active fault today. It was also noted that the field as a whole is only lightly faulted, with more faults in its deeper part. The major structural uncertainties defined included 1) the time horizons, due to reduced data quality especially in the deeper parts of the structure, for which an error equal to one-third of the reflection time thickness was assumed; 2) check-shot analysis, which showed varying velocities within the wells and thus varying depth values for each well; and 3) the very few average velocity points available, due to the limited number of wells, which produced a pessimistic average velocity model.
Keywords: 3D seismic data interpretation, structural uncertainties, attribute analysis, velocity modelling approaches
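As a hedged illustration of the Vo + kZ approach named above, the sketch below converts two-way time to depth for a linear instantaneous velocity function. The V0 and k values and the well top are hypothetical; in practice they would be calibrated against check-shot data and well tops as described.

```python
# Time-depth conversion for an instantaneous velocity V(z) = V0 + k*z.
import numpy as np

V0 = 1800.0      # velocity at datum (m/s) - assumed
k = 0.6          # vertical velocity gradient (1/s) - assumed

def twt_to_depth(twt_s: np.ndarray) -> np.ndarray:
    """Depth below datum from two-way time for a linear velocity function."""
    t_oneway = twt_s / 2.0
    return (V0 / k) * (np.exp(k * t_oneway) - 1.0)

twt = np.array([0.4, 0.8, 1.2])              # picked horizon times (s), placeholders
print(twt_to_depth(twt))                     # converted depths (m)

# Calibration check: residual between the converted depth and a geological well top,
# i.e. the kind of 2-5 m mismatch quoted for the field (well-top value is hypothetical).
well_top_depth = 790.0
residual = twt_to_depth(np.array([0.8]))[0] - well_top_depth
print(f"residual at well: {residual:.1f} m")
```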
Procedia PDF Downloads 59
716 Experimental and Numerical Investigations on the Vulnerability of Flying Structures to High-Energy Laser Irradiations
Authors: Vadim Allheily, Rudiger Schmitt, Lionel Merlat, Gildas L'Hostis
Abstract:
Inflight devices are nowadays major actors in both military and civilian landscapes. Missiles, mortars, rockets and, over the last decade, even drones are increasingly sophisticated, and it is today of prime importance to develop ever more efficient defensive systems against all these potential threats. In this frame, recent High Energy Laser (HEL) weapon prototypes have demonstrated extremely good operational abilities to shoot down, within seconds, flying targets several kilometers away. Whereas test outcomes are promising from both experimental and cost-related perspectives, the deterioration process still needs to be explored to be able to closely predict the effects of a high-energy laser irradiation on typical structures, leading finally to an effective design of laser sources and protective countermeasures. Laser-matter interaction research has a long history of more than 40 years at the French-German Research Institute (ISL). Those studies were tied to laser source development in the mid-60s, mainly for specific metrology of fast phenomena. Nowadays, laser-matter interaction can be viewed as the terminal ballistics of conventional weapons, with the unique capability of laser beams to carry energy at light velocity over large ranges. In recent years, a strong focus was placed at ISL on the interaction process of laser radiation with metal targets such as artillery shells. Due to the absorbed laser radiation and the resulting heating process, an encased explosive charge can be initiated, resulting in deflagration or even detonation of the projectile in flight. Drones and Unmanned Air Vehicles (UAVs) are of utmost interest in modern warfare. Those aerial systems are usually made up of polymer-based composite materials, whose complexity involves new scientific challenges. Aside from this main laser-matter interaction activity, a lot of experimental and numerical knowledge has been gathered at ISL within domains like spectrometry, thermodynamics or mechanics. Techniques and devices were developed to study each aspect of this topic separately; optical characterization, thermal investigations, chemical reaction analysis and mechanical examinations are carried out to neatly estimate essential key values. Results from these diverse tasks are then incorporated into analytic or FE numerical models that were elaborated, for example, to predict the thermal repercussion on explosive charges or mechanical failures of structures. These simulations highlight the influence of each phenomenon during the laser irradiation and forecast experimental observations with good accuracy.
Keywords: composite materials, countermeasure, experimental work, high-energy laser, laser-matter interaction, modeling
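A minimal sketch of the simplest thermal building block behind such models is given below: 1D transient heat conduction in a metal slab with an absorbed laser flux at the front face, solved by explicit finite differences. The material and beam values are generic assumptions, not ISL data.

```python
# 1D explicit finite-difference model of laser heating of a metal slab (illustrative only).
import numpy as np

# Roughly steel-like material and an assumed absorbed flux
k_th, rho, cp = 45.0, 7800.0, 460.0        # W/m/K, kg/m3, J/kg/K
alpha = k_th / (rho * cp)                  # thermal diffusivity (m2/s)
q_abs = 1.0e7                              # absorbed laser flux at the front face, W/m2 - assumed

L, nx = 5e-3, 101                          # 5 mm slab, number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                   # explicit stability limit (factor <= 0.5)
T = np.full(nx, 300.0)                     # initial temperature, K

for step in range(2000):
    Tn = T.copy()
    # interior nodes: standard explicit update of the heat equation
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # front face: absorbed flux enters through a ghost-node boundary condition
    T[0] = Tn[0] + alpha * dt / dx**2 * (2 * Tn[1] - 2 * Tn[0]) + 2 * q_abs * dt / (rho * cp * dx)
    # back face held at the initial temperature
    T[-1] = Tn[-1]

print(f"front-face temperature after {2000 * dt:.3f} s: {T[0]:.0f} K")
```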
Procedia PDF Downloads 263
715 Burnout in the Resident Physician and a Simple Means of Improvement
Authors: Jacob Dangerfield, Jacob Pollard, Jennifer DeCou
Abstract:
Introduction: Burnout, anxiety, and depression are three conditions that are prevalent in medical providers. This is especially the case in the field of anesthesia, which has a high number of providers suffering from burnout and burnout syndrome. A major contributor to this issue is isolation in the workplace, with a perceived lack of peer support as a major risk factor for burnout. Two organizational interventions that can help improve this issue are small group sessions and the provision of affordable mental health services. Per Accreditation Council for Graduate Medical Education (ACGME) guidelines, these affordable mental health services are a requirement of all residency programs, but for a variety of reasons, many residents do not access them. As physicians, we are often not good at asking for help. With this in mind, we hypothesized that carrying out small group resiliency sessions facilitated by Graduate Medical Education (GME) Wellness Counselors would improve both resident peer support and the likelihood that a resident will reach out to GME Wellness in a time of need. Methods: We held small group resiliency sessions with the GME Wellness mental health professionals during protected didactic time. These sessions were small groups comprising the members of one's class (i.e., first-year residents on their own) and were facilitated by 1-2 mental health professionals. After these sessions, we surveyed residents who attended using a short Google Forms survey and, using a 5-point Likert scale, asked them about some outcomes from the session. A "strongly agree" or "agree" was considered a positive response. Results: Results from our survey showed that the resident sessions had multiple positive outcomes. The survey was sent to 29 residents, and we had a 62% response rate. We found through this survey that these small group sessions had a perceived positive impact on resident personal well-being, increased perceived peer support from classmates, and made residents more likely to reach out to GME Wellness in the future. A perceived positive impact on well-being was found in 83% of resident respondents, improved perceived peer support in 83% of respondents, and 78% of resident respondents stated that the session increased their likelihood of reaching out to mental health professionals. Conclusions: Through this study, we can conclude that our hypothesis was correct in that small group resiliency sessions facilitated by GME Wellness Counselors improve both resident peer support and the likelihood that a resident reaches out to these mental health professionals in a time of need. We believe these findings are very important, as they address two important factors that can aid in decreasing a provider's risk of experiencing burnout. Through this simple means, we believe other residency programs can help the well-being of their residents, and together, we can decrease the number of cases of burnout in anesthesia.
Keywords: anesthesiology, burnout, wellness, depression, residents, trainees, mental health
Procedia PDF Downloads 54
714 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada
Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer
Abstract:
Sydney Harbour, Nova Scotia, has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada's largest contaminated sites. Since its closure, there have been several attempts to remediate this former industrial site, and finally, in 2004, the governments of Canada and Nova Scotia committed to remediating the site to reduce potential ecological and human health risks to the environment. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of the remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multiple-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment and, in particular, harbour sediments. Additionally, longer-term natural sediment recovery rates of select contaminants predicted for the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality in addition to remedial efforts were evaluated, which included a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, sediment grab and gravity corer, were also compared to evaluate the detection of subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remains unchanged, although at much lower concentrations than previously reported, due to natural recovery. Measurements of sediment indicator parameter concentrations confirmed that natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, most measured parameters in sediments showed little temporal variability during three years of remediation compared to baseline, even when using different sampling methodologies, except for significant increases in total PAH concentrations detected during one year of remediation monitoring. The data confirmed the effectiveness of the mitigation measures implemented during construction relative to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour.
Keywords: contaminated sediment, monitoring, recovery, remediation
Procedia PDF Downloads 237
713 Exploratory Study on Mediating Role of Commitment-to-Change in Relations between Employee Voice, Employee Involvement and Organizational Change Readiness
Authors: Rohini Sharma, Chandan Kumar Sahoo, Rama Krishna Gupta Potnuru
Abstract:
Strong competitive forces and requirements to achieve efficiency are forcing organizations to realize the necessity and inevitability of change. What's more, the trend does not appear to be abating. Researchers have estimated that about two-thirds of change projects fail. Empirical evidence further shows that organizations invest significantly in planned change, but the people side is accounted for in a token or instrumental way, which is identified as one of the important reasons why change endeavours fail. However, whatever the reason for change, organizational change readiness must be gauged prior to the institutionalization of organizational change. Hence, in this study the influence of employee voice and employee involvement on organizational change readiness via commitment-to-change is examined, as it is an area yet to be extensively studied. Also, though a recent study has investigated the interrelationship between leadership, organizational change readiness and commitment-to-change, our study further examined these constructs in relation to employee voice and employee involvement, which play a consequential role for organizational change readiness. Further, an integrated conceptual model weaving varied concepts relating to organizational readiness, with a focus on commitment-to-change as mediator, was found to be an area which required more theorizing and empirical validation, and this study, rooted in an Indian public sector organization, is a step in this direction. Data for the study were collected through a survey among employees of Rourkela Steel Plant (RSP), a unit of Steel Authority of India Limited (SAIL) and the first integrated steel plant in the public sector in India, for which a stratified random sampling method was adopted. The schedule was distributed to around 700 employees, out of which 516 complete responses were obtained. Pre-validated scales were used for the study. All the variables in the study were measured on a five-point Likert scale ranging from “strongly disagree (1)” to “strongly agree (5)”. Structural equation modeling (SEM) using AMOS 22, which offers a simultaneous test of an entire system of variables in a model, was used to examine the hypothesized model. The study results show that the interrelationships between employee voice and commitment-to-change, between employee involvement and commitment-to-change, and between commitment-to-change and organizational change readiness were significant. To test the mediation hypotheses, Baron and Kenny's technique was used. Examination of the direct and mediated effects confirmed that commitment-to-change partially mediated the relation between employee involvement and organizational change readiness. Furthermore, the study results also affirmed that commitment-to-change does not mediate the relation between employee voice and organizational change readiness. The empirical exploration therefore establishes that it is important to harness employees' valuable suggestions regarding change for building organizational change readiness. Regarding employee involvement, it was found that sharing information and involving people in decision-making lead to the creation of a participative climate, which educes employee commitment during change, and commitment-to-change further fosters organizational change readiness.
Keywords: commitment-to-change, change management, employee voice, employee involvement, organizational change readiness
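For context on the mediation test cited above, the sketch below walks through Baron and Kenny's regression steps on simulated data. The study itself ran the analysis as SEM in AMOS 22, so this is only an illustration of the logic, and all data are invented.

```python
# Bare-bones Baron and Kenny mediation steps on simulated data (not the RSP survey data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 516
involvement = rng.normal(size=n)                                        # predictor (X)
commitment = 0.5 * involvement + rng.normal(size=n)                     # mediator (M)
readiness = 0.3 * involvement + 0.4 * commitment + rng.normal(size=n)   # outcome (Y)

# Step 1: X -> Y, Step 2: X -> M, Step 3: X + M -> Y
step1 = sm.OLS(readiness, sm.add_constant(involvement)).fit()
step2 = sm.OLS(commitment, sm.add_constant(involvement)).fit()
step3 = sm.OLS(readiness, sm.add_constant(np.column_stack([involvement, commitment]))).fit()

c_total = step1.params[1]       # total effect of X on Y
c_direct = step3.params[1]      # direct effect of X on Y controlling for M
print(f"total effect {c_total:.2f}, direct effect {c_direct:.2f}")
# Partial mediation: the direct effect shrinks but remains significant once M is included.
```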
Procedia PDF Downloads 328
712 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology
Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao
Abstract:
With the extensive use of high specific strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibrations and long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for monitoring and controlling structural vibration. The size and location of sensors and actuators are important research topics, since they affect the level of vibration detection and reduction and the amount of energy required by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, evaluating a fitness function based on eigenvalues and eigenvectors for numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of sensor/actuator (s/a) pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton’s principle. The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum for that mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. The optimal sensor locations show good agreement with published optimal locations, but with greatly reduced computational effort and higher effectiveness. Furthermore, collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
Keywords: optimisation, plate, sensor effectiveness, vibration control
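As an illustration of the sensor-effectiveness ranking described above, the following is a minimal Python sketch, assuming a matrix of sensor output voltages (one row per excited mode, one column per candidate location); the voltages here are random placeholders rather than finite element results.

```python
# Rank candidate sensor/actuator locations by average percentage sensor effectiveness.
import numpy as np

rng = np.random.default_rng(1)
n_modes, n_locations = 6, 100                    # first six modes, 100 candidate sensor sites
voltages = np.abs(rng.normal(size=(n_modes, n_locations)))  # placeholder sensor outputs

# Percentage effectiveness per mode: each sensor's voltage divided by the
# maximum voltage observed for that mode.
effectiveness = 100.0 * voltages / voltages.max(axis=1, keepdims=True)

# Average across the modes of interest, then pick the best sites for six s/a pairs.
avg_effectiveness = effectiveness.mean(axis=0)
best_sites = np.argsort(avg_effectiveness)[::-1][:6]
print("candidate s/a locations:", best_sites, avg_effectiveness[best_sites].round(1))
```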
Procedia PDF Downloads 234
711 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang
Abstract:
Knowledge of land surface temperature (LST) is of crucial importance in energy balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and land surface emissivity (LSE) measurements. However, due to its high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined with ECMWF data at only four synoptic times (UTC 00:00, 06:00, 12:00, 18:00) per day for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance are related to the water vapour content (WVC). With the aid of simulated data, this relationship can be determined for each viewing zenith angle and each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are preliminarily removed with the aid of the instantaneous WVC, which is retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a group of brightness temperatures for the surface-leaving radiance (Tg) is acquired. Subsequently, a group of the six parameters of the DTC model is fitted to these Tg by a Levenberg-Marquardt least squares algorithm (denoted as DTC model 1). Although the retrieval error of WVC and the approximate relationships between WVC and the atmospheric parameters induce some uncertainties, they do not significantly affect the determination of the three parameters td, ts and β in the DTC model (β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation). Furthermore, due to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before to two hours after sunrise are excluded. With the knowledge of td, ts and β, a new DTC model (denoted as DTC model 2) is accurately fitted again to the Tg at UTC 05:57, 11:57, 17:57 and 23:57, which are atmospherically corrected with ECMWF data. A new group of the six parameters of the DTC model is then generated, and subsequently the Tg at any given time can be acquired. Finally, this method was successfully applied to SEVIRI data in channel 9. The results show that the proposed method performs reasonably without additional assumptions, and the Tg derived with the improved method is much more consistent with that from radiosonde measurements.
Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI
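The following is a minimal Python sketch of fitting a two-part six-parameter DTC model to surface brightness temperatures Tg with a Levenberg-Marquardt least-squares algorithm, as in DTC model 1 above; the cosine-plus-exponential form and all numerical values are illustrative assumptions, not the exact model or data used by the authors.

```python
# Fit a two-part six-parameter diurnal temperature cycle (DTC) model to Tg samples.
import numpy as np
from scipy.optimize import least_squares

def dtc(t, T0, Ta, beta, td, ts, k):
    # Daytime: cosine harmonic; after ts: exponential attenuation towards T0.
    day = T0 + Ta * np.cos(beta * (t - td))
    T_ts = T0 + Ta * np.cos(beta * (ts - td))        # value at the start of attenuation
    night = T0 + (T_ts - T0) * np.exp(-(t - ts) / k)
    return np.where(t < ts, day, night)

# Synthetic Tg at the 15-minute SEVIRI sampling, standing in for atmospherically
# corrected brightness temperatures (placeholder values).
rng = np.random.default_rng(0)
t_obs = np.arange(0.0, 24.0, 0.25)                   # hours UTC
true_p = (290.0, 15.0, np.pi / 12.0, 13.0, 17.5, 4.0)
Tg_obs = dtc(t_obs, *true_p) + rng.normal(0.0, 0.5, t_obs.size)

def residuals(p):
    return dtc(t_obs, *p) - Tg_obs

# Initial guess: mean level, amplitude, angular frequency beta, td, ts, decay time.
p0 = [288.0, 12.0, 2.0 * np.pi / 24.0, 12.0, 18.0, 3.0]
fit = least_squares(residuals, p0, method="lm")      # Levenberg-Marquardt
print("fitted (T0, Ta, beta, td, ts, k):", np.round(fit.x, 3))
```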
Procedia PDF Downloads 268
710 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of whether an implicit time integration scheme is used. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities which limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
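A minimal numerical sketch of the AGM idea follows, applied to a toy 1D periodic diffusion problem rather than the 2D Navier-Stokes equation: the initial field u0 is recovered from a target final field v1 by iterating forward integrations and adjoint (backward) transports of the misfit, with a high-order Laplacian regulariser added to the cost functional. The grid size, viscosity, step size, and regulariser weight are illustrative choices.

```python
# Adjoint gradient recovery of an initial condition for 1D periodic diffusion.
import numpy as np

nx, nu, T = 128, 1e-3, 1.0
dx = 1.0 / nx
dt = 0.2 * dx**2 / nu                              # explicit-Euler stability margin
nsteps = int(T / dt)
x = np.arange(nx) * dx

def lap(u):
    # Periodic discrete Laplacian; symmetric, hence self-adjoint.
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def forward(u0):
    u = u0.copy()
    for _ in range(nsteps):
        u = u + nu * dt * lap(u)                   # forward (diffusive) integration
    return u

def adjoint(r):
    # Transport the final-time misfit back to t = 0 with the adjoint of the forward map.
    a = r.copy()
    for _ in range(nsteps):
        a = a + nu * dt * lap(a)
    return a

# Synthetic target field v1, produced by a known initial condition.
u0_true = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)
v1 = forward(u0_true)

lam, lr = 1e-11, 0.4                               # regulariser weight, descent step
u0 = np.zeros(nx)                                  # first guess of the initial field
for _ in range(200):
    misfit = forward(u0) - v1                      # distance between u1 and v1
    grad = adjoint(misfit) + lam * lap(lap(u0))    # adjoint gradient + Laplacian regulariser
    u0 -= lr * grad                                # update u0 along the gradient of J
print("relative reconstruction error:",
      np.linalg.norm(u0 - u0_true) / np.linalg.norm(u0_true))
```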
Procedia PDF Downloads 224
709 A Numerical Studies for Improving the Performance of Vertical Axis Wind Turbine by a Wind Power Tower
Authors: Soo-Yong Cho, Chong-Hyun Cho, Chae-Whan Rim, Sang-Kyu Choi, Jin-Gyun Kim, Ju-Seok Nam
Abstract:
Recently, vertical axis wind turbines (VAWTs) have been widely used to produce electricity, even in urban areas. They have several merits, such as low noise, easy installation of the generator, and a simple structure without a yaw-control mechanism. However, their blades operate under the influence of the trailing vortices generated by the preceding blades. This phenomenon deteriorates the output power and makes it difficult to predict the performance correctly. In order to improve the performance of VAWTs, wind power towers can be applied. Usually, a wind power tower is constructed as a multi-story building to increase the frontal area presented to the wind stream. Hence, multiple sets of VAWTs can be installed within the wind power tower and operated at high elevation. Many different types of wind power tower can be used in the field. In this study, a wind power tower with a circular column shape was applied, and the VAWT was installed at the center of the tower. Seven guide walls were used as struts between the floors of the wind power tower. These guide walls were utilized not only to increase the wind velocity within the tower but also to adjust the wind direction to create better working conditions for the VAWT. Hence, several important design variables, such as the distance between the wind turbine and the guide wall, the outer diameter of the wind power tower, and the direction of the guide wall relative to the wind direction, should be considered to enhance the output power of the VAWT. A numerical analysis was conducted to find the optimum dimensions of the design variables using computational fluid dynamics (CFD), chosen among many prediction methods. CFD can be an accurate prediction method compared with stream-tube methods. In order to obtain accurate CFD results, transient analysis and full three-dimensional (3-D) computation are needed. However, full 3-D CFD is hard to use as a practical tool because it requires huge computation times. Therefore, a reduced computational domain is applied as a practical method. In this study, the computations were conducted in the reduced computational domain and compared with experimental results from the literature, and the mechanism behind the difference between the experimental and computational results was examined. The computed results showed that this computational method can be effective in a design methodology using an optimization algorithm. After validation of the numerical method, CFD analyses of the wind power tower were conducted with the important design variables affecting the performance of the VAWT. The results showed that the output power of the VAWT obtained using the wind power tower was increased compared to that obtained without the wind power tower. In addition, they showed that the increase in output power depended greatly on the dimensions of the guide wall.
Keywords: CFD, performance, VAWT, wind power tower
Procedia PDF Downloads 388
708 Insights into Particle Dispersion, Agglomeration and Deposition in Turbulent Channel Flow
Authors: Mohammad Afkhami, Ali Hassanpour, Michael Fairweather
Abstract:
The work described in this paper was undertaken to gain insight into fundamental aspects of turbulent gas-particle flows with relevance to processes employed in a wide range of applications, such as oil and gas flow assurance in pipes, powder dispersion from dry powder inhalers, and particle resuspension in nuclear waste ponds, to name but a few. In particular, the influence of particle interaction and fluid phase behavior in turbulent flow on particle dispersion in a horizontal channel is investigated. The mathematical modeling technique used is based on the large eddy simulation (LES) methodology embodied in the commercial CFD code FLUENT, with flow solutions provided by this approach coupled to a second commercial code, EDEM, based on the discrete element method (DEM), which is used for the prediction of particle motion and interaction. The results generated by LES for the fluid phase have been validated against direct numerical simulations (DNS) for three different channel flows with shear Reynolds numbers Reτ = 150, 300 and 590. Overall, the LES shows good agreement, with mean velocities and normal and shear stresses matching those of the DNS in both magnitude and position. The research work has focused on the prediction of those conditions favoring particle aggregation and deposition within turbulent flows. Simulations have been carried out to investigate the effects of particle size, density and concentration on particle agglomeration. Furthermore, particles with different surface properties have been simulated in three channel flows with different levels of flow turbulence, achieved by increasing the Reynolds number of the flow. The simulations mimic the conditions of two-phase, fluid-solid flows frequently encountered in domestic, commercial and industrial applications, for example, air conditioning and refrigeration units, heat exchangers, and oil and gas suction and pressure lines. The particle sizes, densities, surface energies and volume fractions selected are 45.6, 102 and 150 µm; 250, 1000 and 2159 kg m⁻³; 50, 500, and 5000 mJ m⁻²; and 7.84 × 10⁻⁶, 2.8 × 10⁻⁵, and 1 × 10⁻⁴, respectively; such particle properties are associated with particles found in soil, as well as metals and oxides prevalent in turbulent bounded fluid-solid flows due to erosion and corrosion of inner pipe walls. It has been found that the turbulence structure of the flow dominates the motion of the particles, creating particle-particle interactions, with most of these interactions taking place at locations close to the channel walls and in regions of high turbulence, where agglomeration is aided both by the high levels of turbulence and by the high concentration of particles. A positive relationship between particle surface energy, concentration, size and density, and agglomeration was observed. Moreover, the results derived for the three Reynolds numbers considered show that, for high surface energy particles, the rate of agglomeration is strongly influenced by, and increases with, the intensity of the flow turbulence. In contrast, for lower surface energy particles, the rate of agglomeration diminishes with an increase in flow turbulence intensity.
Keywords: agglomeration, channel flow, DEM, LES, turbulence
Procedia PDF Downloads 318
707 Additive Manufacturing with Ceramic Filler
Authors: Irsa Wolfram, Boruch Lorenz
Abstract:
Innovative solutions with additive manufacturing applying material extrusion for functional parts necessitate innovative filaments with consistent quality. Uniform homogeneity and a consistent dispersion of particles embedded in filaments generally require multiple cycles of extrusion or well-prepared primal matter produced by injection molding, kneader machines, or mixing equipment. These technologies require dedicated equipment that is rarely at the disposal of production laboratories unfamiliar with research in polymer materials. This stands in contrast to laboratories that investigate complex material topics and technology science to leverage the potential of 3-D printing. Consequently, scientific studies in such labs are often constrained to the compositions and concentrations of fillers offered on the market. Therefore, we introduce a prototypal laboratory methodology, scalable to tailored primal matter, for extruding ceramic composite filaments with fused filament fabrication (FFF) technology. A desktop single-screw extruder serves as the core device for the experiments. Custom-made filaments encapsulate the ceramic fillers and use polylactide (PLA), a thermoplastic polyester, as primal matter, which is processed in the melting zone of the extruder while preserving the defined concentration of the fillers. Validated results demonstrate that this approach enables continuously produced and uniform composite filaments with consistent homogeneity. The filament is 3-D printable with controllable dimensions, which is a prerequisite for any scalable application. Additionally, digital microscopy confirms the steady dispersion of the ceramic particles in the composite filament. This permits a 2D reconstruction of the planar distribution of the embedded ceramic particles in the PLA matrices. The innovation of the introduced method lies in the smart simplicity of preparing the composite primal matter. It circumvents the inconvenience of numerous extrusion operations and expensive laboratory equipment. Nevertheless, it delivers consistent filaments of controlled, predictable, and reproducible filler concentration, which is the prerequisite for any industrial application. The introduced prototypal laboratory methodology appears applicable to other polymer matrices and suitable for further particle types beyond ceramic fillers. This opens a roadmap for further laboratory development of specialised composite filaments, providing value for industries and societies. This low-threshold entry to the sophisticated preparation of composite filaments - enabling businesses to create their own dedicated filaments - will support the mutual efforts to extend 3D printing to new functional devices.
Keywords: additive manufacturing, ceramic composites, complex filament, industrial application
Procedia PDF Downloads 106
706 Mathematical Modelling of Bacterial Growth in Products of Animal Origin in Storage and Transport: Effects of Temperature, Use of Bacteriocins and pH Level
Authors: Benjamin Castillo, Luis Pastenes, Fernando Cordova
Abstract:
The growth of pathogens in animal source foods is a common problem in the food industry, causing monetary losses due to the spoiling of products or food intoxication outbreaks in the community. In this sense, the quality of the product is reflected by the population of deteriorating agents present in it, which are mainly bacteria. The factors most likely associated with freshness in animal source foods are temperature and the processing, storage, and transport times. However, the level of deterioration of products also depends on the characteristics of the bacterial population causing the decomposition or spoiling, such as the pH level and toxins. Knowing the growth dynamics of the agents involved in product contamination allows monitoring for more efficient processing. This means better quality and reasonable costs, along with a better estimation of the time and temperature intervals needed for transport and storage in order to preserve product quality. The objective of this project is to design a secondary model that measures the impact of temperature on bacterial growth, together with the competition involving pH adequacy and the release of bacteriocins, in order to describe this phenomenon and thus estimate the half-life of a food product with the least possible risk of deterioration or spoiling. In order to achieve this objective, the authors propose the analysis of a three-dimensional system of ordinary differential equations which includes: logistic bacterial growth extended by the inhibitory action of bacteriocins and the effect of the medium pH; the change in the medium pH through an adaptation of the Luedeking-Piret kinetic model; and the bacteriocin concentration, modeled similarly to the pH level. All three dimensions are influenced by temperature at all times. This differential system is then expanded to take into consideration a variable temperature and the concentration of pulsed bacteriocins, which represent characteristics inherent to the modelled scenario, such as transport and storage, as well as the incorporation of substances that inhibit bacterial growth. The main results show that temperature changes in an early stage of transport increased the bacterial population significantly more than if they had occurred during the final stage. On the other hand, the incorporation of bacteriocins, as in other investigations, proved to be efficient in the short and medium term since, although the population of bacteria decreased, once the bacteriocins were depleted or degraded over time, the bacteria eventually returned to their regular growth rate. The efficacy of the bacteriocins at low temperatures decreased slightly, which is consistent with the fact that their natural degradation rate also decreased. In summary, the implementation of the mathematical model allowed the simulation of a set of possible bacteria present in animal-based products, along with their properties, in various transport and storage situations, which leads us to state that, for inhibiting bacterial growth, the optimum is a combination of low constant temperatures and the initial use of bacteriocins.
Keywords: bacterial growth, bacteriocins, mathematical modelling, temperature
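A minimal sketch of such a three-dimensional system is given below in Python, coupling logistic bacterial growth inhibited by bacteriocins and modulated by pH, a Luedeking-Piret type acidification equation, and first-order bacteriocin degradation, all driven by a time-varying transport/storage temperature; every functional form and parameter value is an illustrative assumption, not the authors' calibrated model.

```python
# Toy bacteria / pH / bacteriocin system under a time-varying temperature.
import numpy as np
from scipy.integrate import solve_ivp

def temperature(t):
    """Storage/transport temperature profile (deg C): a warming event early in transport."""
    return 4.0 + 8.0 * np.exp(-((t - 10.0) / 4.0) ** 2)

def mu(T, b=0.025, Tmin=-2.0):
    """Square-root (Ratkowsky-style) dependence of growth rate on temperature (assumed form)."""
    return (b * (T - Tmin)) ** 2

def rhs(t, y, K=1e9, kB=0.02, alpha=2e-10, beta=1e-12, kd=0.05, pH_min=4.0, pH_opt=7.0):
    N, pH, B = y
    pH_factor = np.clip((pH - pH_min) / (pH_opt - pH_min), 0.0, 1.0)
    growth = mu(temperature(t)) * pH_factor * N * (1.0 - N / K)   # logistic growth
    dN = growth - kB * B * N                                      # bacteriocin inhibition
    dpH = -(alpha * growth + beta * N)                            # Luedeking-Piret acidification
    dB = -kd * B                                                  # bacteriocin degradation
    return [dN, dpH, dB]

# Initial state: 1e3 CFU/mL, near-neutral pH, an initial pulse of bacteriocin (arbitrary units).
y0 = [1e3, 6.8, 50.0]
sol = solve_ivp(rhs, (0.0, 72.0), y0, max_step=0.1)               # 72 h of storage/transport
print("final population, pH, bacteriocin:", sol.y[0, -1], sol.y[1, -1], sol.y[2, -1])
```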
Procedia PDF Downloads 137
705 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers
Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy
Abstract:
In urban planning, an increasing number of cities require wind analyses to verify the comfort of public spaces and around buildings. These studies are made using computational fluid dynamics (CFD) simulations. However, this technique is often based on wind information taken from meteorological stations located several kilometres from the spot of analysis. The approximated input data on the project surroundings produce imprecise results for this type of analysis; they can only be used to obtain the general behavior of wind in a zone, not to evaluate precise wind speeds. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasound anemometers. These are wireless devices that send immediate wind data to a remote server. Assembled in an array, these devices generate geo-localized data on wind, such as speed, temperature, and pressure, and allow us to compare wind behavior on a specific site or building. These Netatmo-type anemometers communicate by wifi with central equipment, which shares the data acquired by a wide variety of devices, such as wind speed, indoor and outdoor temperature, rainfall, and sunshine. Besides its precision, this method extracts geo-localized data on any type of site that can be fed back into the architectural design of a building or a public place. Furthermore, this method allows a precise calibration of a virtual wind tunnel using numerical aeraulic simulations (such as the STAR-CCM+ software) and then the development of a complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases connected ultrasonic anemometers that were installed for an 18-month survey on four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers, whose diversity of typologies and buildings allows different ways of capturing wind energy to be considered. The objective of this approach is to categorize the different types of wind in urban areas. This, particularly the identification of the minimum and maximum wind spectrum, helps define the choice and performance of the wind energy capturing devices that could be installed there, taking into account the location on the roof of a building, the type of wind, the height of the device relative to the roof levels, and the potential nuisances generated. The method allows the characteristics of wind turbines to be identified in order to maximize their performance in an urban site with turbulent wind.
Keywords: computer fluid dynamic simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology
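As an illustration of how the geo-localised readings can be aggregated into a wind cartography, the following is a minimal Python sketch; the record format (site, latitude, longitude, speed, direction) is a hypothetical example of the data pushed to the remote server, not the actual device payload.

```python
# Aggregate geo-localised anemometer readings into per-site wind statistics.
import pandas as pd

readings = pd.DataFrame(
    [
        # site, lat, lon, speed (m/s), direction (deg)
        ("roof_A", 48.8566, 2.3522, 3.2, 250),
        ("roof_A", 48.8566, 2.3522, 5.1, 265),
        ("roof_B", 48.8530, 2.3499, 1.4, 180),
        ("roof_B", 48.8530, 2.3499, 2.0, 190),
    ],
    columns=["site", "lat", "lon", "speed", "direction"],
)

# Coarse 8-sector wind-direction bin (0 = N, 1 = NE, ..., 7 = NW).
readings["sector"] = (((readings["direction"] + 22.5) // 45) % 8).astype(int)

# Per-site statistics for the wind map: mean/max speed and the dominant sector.
summary = readings.groupby("site").agg(
    lat=("lat", "first"), lon=("lon", "first"),
    mean_speed=("speed", "mean"), max_speed=("speed", "max"),
    dominant_sector=("sector", lambda s: s.mode().iloc[0]),
)
print(summary)
```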
Procedia PDF Downloads 103
704 Petrology of the Post-Collisional Dolerites, Basalts from the Javakheti Highland, South Georgia
Authors: Bezhan Tutberidze
Abstract:
The Neogene-Quaternary volcanic rocks of the Javakheti Highland are products of post-collisional continental magmatism and are related to the divergent and convergent margins of the Eurasian and Afro-Arabian lithospheric plates. The studied area constitutes an integral part of the volcanic province of central South Georgia. Three cycles of volcanic activity are identified here: 1. Late Miocene-Early Pliocene, 2. Late Pliocene-Early /Middle/ Pleistocene and 3. Late Pleistocene. Intense basic dolerite magmatic activity occurred within the Late Pliocene and lasted until at least the Late /Middle/ Pleistocene. The age of the volcanogenic and volcanogenic-sedimentary formation was dated by geomorphological, paleomagnetic, paleontological and geochronological methods /1.7-1.9 Ma/. The volcanic area of the Javakheti Highland contains multiple dolerite plateaus: Akhalkalaki, Gomarethi, Dmanisi, and Tsalka. Petrographic observations of these doleritic rocks reveal a fairly constant mineralogical composition: olivine /Fo₈₇.₆₋₈₂.₇/, plagioclase /Ab₂₂.₈ An₇₅.₉ Or₁.₃; Ab₄₅.₀₋₃₂.₃ An₅₂.₉₋₆₂.₃ Or₂.₁₋₅.₄/. The pyroxene is an augite and may exhibit visible zoning: /Wo₃₉.₇₋₄₃.₁ En₄₃.₅₋₄₅.₂ Fs₁₆.₈₋₁₁.₇/. Opaque minerals /magnetite, titanomagnetite/ are abundant as inclusions within olivine and pyroxene crystals. The dolerites exhibit intergranular, holocrystalline to ophitic and sub-ophitic granular textures. The dolerites are commonly vesicular rocks. Vesicles range in shape from spherical to elongated and in size from 0.5 mm to more than 1.5-2 cm, and make up about 20-50 % of the volume. The dolerites have been subjected to considerable alteration. The secondary minerals in the geothermal field are zeolite, calcite, chlorite, aragonite, clay-like minerals /dominated by smectites/ and an iddingsite-like mineral; rare quartz and pumpellyite are present. The vesicles are filled by these secondary minerals. Chemically, the dolerites are calc-alkalic, transitional to sub-alkaline, with a predominance of Na₂O over K₂O. Chemical analyses indicate that the dolerites of all plateaus of the Javakheti Highland have similar geochemical compositions, signifying that they were formed from the same magmatic source by crystallization of a weakly differentiated olivine basalt magma /⁸⁷Sr/⁸⁶Sr 0.703920-0.704195/. There is one, less convincing, argument according to which the dolerites/basalts of the Javakheti Highland are considered to be the product of mantle plume activity. Unfortunately, no reliable evidence exists to prove this. The petrochemical peculiarities and eruption style of the dolerites of the Javakheti Plateau argue against a plume origin. Nevertheless, it is not excluded that a plume influenced the formation of the dolerite-producing primary basaltic magma.
Keywords: calc-alkalic, dolerite, Georgia, Javakheti Highland
Procedia PDF Downloads 272
703 Synergistic Effect of Chondroinductive Growth Factors and Synovium-Derived Mesenchymal Stem Cells on Regeneration of Cartilage Defects in Rabbits
Authors: M. Karzhauov, A. Mukhambetova, M. Sarsenova, E. Raimagambetov, V. Ogay
Abstract:
Regeneration of injured articular cartilage remains one of the most difficult and unsolved problems in traumatology and orthopedics. Currently, cartilage defects are treated with surgical techniques that stimulate cartilage regeneration in damaged joints, such as multiple microperforation, mosaic chondroplasty, abrasion and microfracture. However, as clinical practice has shown, these techniques cannot provide a full and sustainable recovery of articular hyaline cartilage. In this regard, current hopes for the regeneration of cartilage defects are reasonably associated with the use of tissue engineering approaches to restore the structural and functional characteristics of damaged joints using stem cells, growth factors and biopolymers or scaffolds. The purpose of the present study was to investigate the effects of chondroinductive growth factors and synovium-derived mesenchymal stem cells (SD-MSCs) on the regeneration of cartilage defects in rabbits. SD-MSCs were isolated from the synovial membrane of Flemish giant rabbits and expanded in complete culture medium α-MEM. Rabbit SD-MSCs were characterized by a CFU assay and by their ability to differentiate into osteoblasts, chondrocytes and adipocytes. The effects of growth factors (TGF-β1, BMP-2, BMP-4 and IGF-I) on MSC chondrogenesis were examined in micromass pellet cultures using histological and biochemical analyses. An articular cartilage defect (4 mm in diameter) in the intercondylar groove of the patellofemoral joint was created with a kit for mosaic chondroplasty. The defect extended to the subchondral bone plate. Delivery of SD-MSCs and growth factors was conducted in combination with hyaluronic acid (HA). The SD-MSC, growth factor and control groups were compared macroscopically and histologically at 10, 30, 60 and 90 days after intra-articular injection. Our in vitro comparative study revealed that TGF-β1 and BMP-4 are key chondroinductive factors for both the growth and the chondrogenesis of SD-MSCs. The strongest effect on MSC chondrogenesis was observed with the synergistic interaction of TGF-β1 and BMP-4. In addition, biochemical analysis of the chondrogenic micromass pellets revealed that the levels of glycosaminoglycans and DNA after combined treatment with TGF-β1 and BMP-4 were significantly higher in comparison to the individual application of these factors. The in vivo study showed that complete regeneration of cartilage defects after intra-articular injection of SD-MSCs with HA takes 90 days. However, a single injection of SD-MSCs in combination with TGF-β1, BMP-4 and HA significantly increased the regeneration rate of the cartilage defects in rabbits; in this case, complete regeneration of the cartilage defects was observed 30 days after intra-articular injection. Thus, our in vitro and in vivo studies demonstrated that the combined application of rabbit SD-MSCs with chondroinductive growth factors and HA results in a strong synergistic effect on chondrogenesis, significantly enhancing regeneration of the damaged cartilage.
Keywords: mesenchymal stem cells, synovium, chondroinductive factors, TGF-β1, BMP-2, BMP-4, IGF-I
Procedia PDF Downloads 306
702 How Does Paradoxical Leadership Enhance Organizational Success?
Authors: Wageeh A. Nafei
Abstract:
This paper explores the role of Paradoxical Leadership (PL) in enhancing Organizational Success (OS) at private hospitals in Egypt. It is based on data collected from employees in private hospitals (doctors, nursing staff, and administrative staff), for which the researcher adopted a sampling method. Appropriate statistical methods, such as the Alpha Correlation Coefficient (ACC), Confirmatory Factor Analysis (CFA), and Multiple Regression Analysis (MRA), are used to analyze the data and test the hypotheses. The research has reached a number of results, the most important of which are: (1) There is a statistical relationship between the independent variable, represented by PL, and the dependent variable, represented by OS. The paradoxical leader encourages employees to express their opinions and builds a work environment characterized by flexibility and independence. The paradoxical leader also supports specialized work teams, which leads to the creation of new ideas on the one hand and contributes to the achievement of outstanding performance on the other. (2) The mentality of the paradoxical leader is flexible and capable of absorbing suggestions from all employees. The paradoxical leader is also interested in enhancing cooperation among employees and provides opportunities to transfer experience and increase knowledge-sharing. The sharing of knowledge creates the diversity needed for the organization to obtain rich external information and enables it to deal with a rapidly changing environment. (3) The PL approach helps in facing the paradoxical demands of employees. A paradoxical leader plays an important role in reducing the feeling of instability in the work environment and the lack of job security, reducing employees' negative feelings, restoring balance in the work environment, improving employee well-being, and increasing the degree of job satisfaction of employees in the organization. The study offers a number of recommendations, the most important of which are: (1) Organizational leaders must listen to the views and needs of employees and move away from formal methods of control. The leader should give employees sufficient freedom to participate in decision-making and maintain enough space for them, and the relationship between leaders and employees should be based on friendliness. (2) Organizational leaders need to pay attention to knowledge-sharing among employees through training courses. The leader should make sure that every piece of information provided by an employee is valuable and useful and can be used to solve a problem that colleagues may face at work. (3) Organizational leaders need to pay attention to knowledge-sharing among employees through brainstorming sessions. The leader should ensure that employees obtain knowledge from their colleagues and share ideas and information among themselves, in addition to motivating employees to complete their work in new, creative ways so that they do not feel bored with repeating the same routine procedures in the organization.
Keywords: paradoxical leadership, organizational success, human resource, management
Procedia PDF Downloads 59