Search results for: code blue response time
1223 Artificial Intelligence and Governance in Relevance to Satellites in Space
Authors: Anwesha Pathak
Abstract:
With the increasing number of satellites and space debris, space traffic management (STM) becomes crucial. AI can aid in STM by predicting and preventing potential collisions, optimizing satellite trajectories, and managing orbital slots. Governance frameworks need to address the integration of AI algorithms in STM to ensure safe and sustainable satellite activities. AI and governance play significant roles in the context of satellite activities in space. Artificial intelligence (AI) technologies, such as machine learning and computer vision, can be utilized to process vast amounts of data received from satellites. AI algorithms can analyse satellite imagery, detect patterns, and extract valuable information for applications like weather forecasting, urban planning, agriculture, disaster management, and environmental monitoring. AI can assist in automating and optimizing satellite operations. Autonomous decision-making systems can be developed using AI to handle routine tasks like orbit control, collision avoidance, and antenna pointing. These systems can improve efficiency, reduce human error, and enable real-time responsiveness in satellite operations. AI technologies can be leveraged to enhance the security of satellite systems. AI algorithms can analyze satellite telemetry data to detect anomalies, identify potential cyber threats, and mitigate vulnerabilities. Governance frameworks should encompass regulations and standards for securing satellite systems against cyberattacks and ensuring data privacy. AI can optimize resource allocation and utilization in satellite constellations. By analyzing user demands, traffic patterns, and satellite performance data, AI algorithms can dynamically adjust the deployment and routing of satellites to maximize coverage and minimize latency. Governance frameworks need to address fair and efficient resource allocation among satellite operators to avoid monopolistic practices. Satellite activities involve multiple countries and organizations. Governance frameworks should encourage international cooperation, information sharing, and standardization to address common challenges, ensure interoperability, and prevent conflicts. AI can facilitate cross-border collaborations by providing data analytics and decision support tools for shared satellite missions and data sharing initiatives. AI and governance are critical aspects of satellite activities in space. They enable efficient and secure operations, ensure responsible and ethical use of AI technologies, and promote international cooperation for the benefit of all stakeholders involved in the satellite industry.Keywords: satellite, space debris, traffic, threats, cyber security.
Procedia PDF Downloads 76
1222 Real-world Characterization of Treatment Intensified (Add-on to Metformin) Adults with Type 2 Diabetes in Pakistan: A Multi-center Retrospective Study (Converge)
Authors: Muhammad Qamar Masood, Syed Abbas Raza, Umar Yousaf Raja, Imran Hassan, Bilal Afzal, Muhammad Aleem Zahir, Atika Shaheer
Abstract:
Background: Cardiovascular disease (CVD) is a major burden among people with type 2 diabetes (T2D), with 1 in 3 reported to have CVD. Therefore, understanding real-world clinical characteristics and prescribing patterns could help improve care. Objective: The CONVERGE (Cardiovascular Outcomes and Value in the Real world with GLP-1RAs) study characterized demographics and medication usage patterns in the overall population of adults with T2D whose treatment was intensified (add-on to metformin). The data were further divided into subgroups {dipeptidyl peptidase-4 inhibitors (DPP-4is), sulfonylureas (SUs), insulins, glucagon-like peptide-1 receptor agonists (GLP-1 RAs) and sodium-glucose cotransporter-2 inhibitors (SGLT-2is)}, according to the latest prescribed antidiabetic agent (ADA), in India/Pakistan/Thailand. Here, we report findings from Pakistan. Methods: This multi-center retrospective study utilized data from medical records between 13-Sep-2008 (post-market approval of GLP-1RAs) and 31-Dec-2017 in adults (≥18 years old). The data were collected from five centers/institutes located in major cities of Pakistan (Karachi, Lahore, Islamabad, and Multan): National Hospital, Aga Khan University Hospital, Diabetes Endocrine Clinic Lahore, Shifa International Hospital, and Mukhtar A Sheikh Hospital Multan. Data were collected at the start of the medical record and at 6 or 12 months prior to baseline, depending on variable type, and analyzed descriptively. Results: Overall, 1,010 patients were eligible. At baseline, the overall mean (SD) age was 51.6 (11.3) years, T2D duration was 2.4 (2.6) years, HbA1c was 8.3% (1.9), and 35% had received ≥1 CVD medication in the past year (before baseline). The most frequently prescribed ADAs post-metformin were DPP-4is and SUs (~63%). Only 6.5% received GLP-1RAs, and SGLT-2is were not available in Pakistan during the study period. Overall, it took a mean of 4.4 years and 5 years to initiate GLP-1RAs and SGLT-2is, respectively. In comparison to other subgroups, more patients in the GLP-1RA subgroup received ≥3 types of ADA (58%) and ≥1 CVD medication (64%), and they had a higher body mass index (37 kg/m2). Conclusions: Utilization of GLP-1RAs and SGLT-2is was low; these agents took longer to initiate and were not prescribed before multiple other ADAs had been tried. This may be due to a lack of evidence for the CV benefits of these agents during the study period. The planned phase 2 of the CONVERGE study can provide more insights into the utilization of, and barriers to prescribing, GLP-1RAs and SGLT-2is post 2018 in Pakistan. Keywords: type 2 diabetes, GLP-1RA, treatment intensification, cardiovascular disease
Procedia PDF Downloads 60
1221 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack
Authors: Varun Agarwal
Abstract:
Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a full-scanned, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and ameliorate breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages -region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classifications, probabilistic mapping of tumor localizations, and further processing for whole WSI classification. Transfer learning is applied to the task, with the implementation of Inception-ResNetV2 - an effective CNN classifier that uses residual connections to enhance feature representation, adding convolved outputs in the inception unit to the proceeding input data. Moreover, in order to augment the performance of the transfer learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder -primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers- and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer learning Inception-ResNetV2 network enhanced with the CDAE stack yields state of the art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation with the residual connections to inception units synergized with the input denoising algorithm enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images
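To make the transfer-learning stage concrete, a minimal sketch of a tile-level classifier is given below. It assumes a TensorFlow/Keras implementation, a 299x299 tile size, and the helper name build_tile_classifier; these are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' implementation) of the transfer-learning
# tile classifier described above: an ImageNet-pretrained Inception-ResNetV2
# backbone with a small binary head for tumor/normal WSI tiles.
import tensorflow as tf

def build_tile_classifier(tile_size=299):
    # Assumed tile size; tiles are resized to the network's native 299x299 input.
    base = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet",
        input_shape=(tile_size, tile_size, 3))
    base.trainable = False  # freeze backbone for the initial transfer-learning phase

    inputs = tf.keras.Input(shape=(tile_size, tile_size, 3))
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(inputs)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(tumor) per tile

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["AUC"])
    return model

model = build_tile_classifier()
# Tile-level probabilities can then be stitched into a probability map of tumor
# localizations and post-processed to obtain the slide-level classification.
```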
Procedia PDF Downloads 130
1220 In vivo Wound Healing Activity and Phytochemical Screening of the Crude Extract and Various Fractions of Kalanchoe petitiana A. Rich (Crassulaceae) Leaves in Mice
Authors: Awol Mekonnen, Temesgen Sidamo, Epherm Engdawork, Kaleab Asresb
Abstract:
Ethnopharmacological Relevance: The leaves of Kalanchoe petitiana A. Rich (Crassulaceae) are used in Ethiopian folk medicine for the treatment of evil eye, on fractured surfaces for bone setting, and for several skin disorders, including sores, boils, and malignant wounds. Aim of the Study: In order to scientifically substantiate the claimed uses of the plant, the effects of the extract and the fractions were investigated using in vivo excision, incision, and dead space wound models. Materials and Methods: Mice were used for the wound healing study, while rats and rabbits were used for the skin irritation test. For studying healing activity, the 80% methanolic extract and the fractions were formulated at strengths of 5% and 10%, either as ointments (hydroalcoholic extract, aqueous and methanol fractions) or as a gel (chloroform fraction). Oral administration of the crude extract was used for the dead space model. Negative controls were treated either with simple ointment or sodium carboxymethyl cellulose xerogel, while positive controls were treated with nitrofurazone (0.2% w/v) skin ointment. Negative controls for the dead space model were treated with 1% carboxymethyl cellulose. Parameters including rate of wound contraction, period of complete epithelialization, hydroxyproline content, and skin breaking strength were evaluated. Results: Significant wound healing activity was observed with ointment formulated from the crude extract at both 5% and 10% concentrations (p<0.01) compared to controls in both excision and incision models. In the dead space model, 600 mg/kg (p<0.01), but not 300 mg/kg, significantly increased hydroxyproline content. The fractions showed variable effects, with the chloroform fraction lacking any significant effect. Both 5% and 10% formulations of the aqueous and methanolic fractions significantly increased wound contraction, decreased epithelialization time, and increased hydroxyproline content in the excision wound model (p<0.05) compared to controls. These fractions also produced higher skin breaking strength in the incision wound model (p<0.01). Conclusions: The present study provides evidence that the leaves of Kalanchoe petitiana A. Rich possess remarkable wound healing activities, supporting the folkloric use of the plant. Fractionation revealed that polar or semi-polar compounds may play a vital role, as both the aqueous and methanolic fractions were endowed with wound healing activity. Keywords: wound healing, Kalanchoe petitiana, excision wound, incision wound, dead space model
Procedia PDF Downloads 309
1219 Formation of the Water Assisted Supramolecular Assembly in the Transition Structure of Organocatalytic Asymmetric Aldol Reaction: A DFT Study
Authors: Kuheli Chakrabarty, Animesh Ghosh, Atanu Roy, Gourab Kanti Das
Abstract:
Aldol reaction is an important class of carbon-carbon bond forming reactions. One popular way to impose asymmetry in the aldol reaction is the introduction of a chiral auxiliary that binds the approaching reactants and creates dissymmetry in the reaction environment, which finally evolves into enantiomeric excess in the aldol products. The last decade has witnessed the use of natural amino acids as chiral auxiliaries to control the stereoselectivity in various carbon-carbon bond forming processes. In this context, L-proline was found to be an effective organocatalyst in asymmetric aldol additions. In the last few decades, the use of water as a solvent or co-solvent in asymmetric organocatalytic reactions has increased sharply. Simple amino acids like L-proline do not catalyze the asymmetric aldol reaction in aqueous medium; moreover, in organic solvent medium a high catalyst loading (~30 mol%) is required to achieve moderate to high asymmetric induction. In this context, considerable effort has been made to modify L-proline and 4-hydroxy-L-proline to prepare organocatalysts for the aqueous-medium asymmetric aldol reaction. Here, we report the results of our DFT calculations on the asymmetric aldol reaction of benzaldehyde, p-NO2 benzaldehyde, and t-butyraldehyde with a number of ketones using L-proline hydrazide as the organocatalyst under wet, solvent-free conditions. The Gaussian 09 program package and the GaussView program were used for the present work. Geometry optimizations were performed using the B3LYP hybrid functional and the 6-31G(d,p) basis set. Transition structures were confirmed by Hessian calculations and IRC calculations. As the reactions were carried out under solvent-free conditions, no solvent effects were studied theoretically. The present study reveals, for the first time, the direct involvement of two water molecules in the aldol transition structures. In the TS, the enamine and the aldehyde are connected through hydrogen bonding with the assistance of two intervening water molecules, forming a supramolecular network. Formation of this type of supramolecular assembly is possible due to the presence of the protonated -NH2 group in the L-proline hydrazide moiety, which is responsible for the favorable entropy contribution to the aldol reaction. It is also revealed from the present study that the water-assisted TS is energetically more favorable than the TS without the involvement of any water molecule. It can be concluded from this study that insertion of a polar group capable of hydrogen bond formation into the L-proline skeleton can lead to a favorable aldol reaction with significantly high enantiomeric excess under wet, solvent-free conditions by reducing the activation barrier of the reaction. Keywords: aldol reaction, DFT, organocatalysis, transition structure
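As a rough illustration of how a computed difference in activation free energy between competing transition structures translates into enantiomeric excess, the short sketch below applies the standard transition-state-theory relation; the numerical barrier gap used is hypothetical and not a result of this study.

```python
# Minimal sketch: predicted enantiomeric excess (ee) from the free-energy gap
# between competing diastereomeric aldol transition structures, assuming
# transition-state theory (k_minor/k_major = exp(-ddG/RT)).
# The ddG value below is a hypothetical example, not a computed result.
import math

R = 8.314462618e-3   # kJ mol^-1 K^-1
T = 298.15           # K

def predicted_ee(ddG_kJ_per_mol, temperature=T):
    """ee (%) from the difference in activation free energy between the two TSs."""
    ratio = math.exp(-ddG_kJ_per_mol / (R * temperature))  # minor/major rate ratio
    return (1.0 - ratio) / (1.0 + ratio) * 100.0

print(f"ddG = 8 kJ/mol  ->  ee = {predicted_ee(8.0):.1f}%")  # hypothetical barrier gap
```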
Procedia PDF Downloads 434
1218 Utilization of Process Mapping Tool to Enhance Production Drilling in Underground Metal Mining Operations
Authors: Sidharth Talan, Sanjay Kumar Sharma, Eoin Joseph Wallace, Nikita Agrawal
Abstract:
Underground mining is at the core of rapidly evolving metals and minerals sector due to the increasing mineral consumption globally. Even though the surface mines are still more abundant on earth, the scales of industry are slowly tipping towards underground mining due to rising depth and complexities of orebodies. Thus, the efficient and productive functioning of underground operations depends significantly on the synchronized performance of key elements such as operating site, mining equipment, manpower and mine services. Production drilling is the process of conducting long hole drilling for the purpose of charging and blasting these holes for the production of ore in underground metal mines. Thus, production drilling is the crucial segment in the underground metal mining value chain. This paper presents the process mapping tool to evaluate the production drilling process in the underground metal mining operation by dividing the given process into three segments namely Input, Process and Output. The three segments are further segregated into factors and sub-factors. As per the study, the major input factors crucial for the efficient functioning of production drilling process are power, drilling water, geotechnical support of the drilling site, skilled drilling operators, services installation crew, oils and drill accessories for drilling machine, survey markings at drill site, proper housekeeping, regular maintenance of drill machine, suitable transportation for reaching the drilling site and finally proper ventilation. The major outputs for the production drilling process are ore, waste as a result of dilution, timely reporting and investigation of unsafe practices, optimized process time and finally well fragmented blasted material within specifications set by the mining company. The paper also exhibits the drilling loss matrix, which is utilized to appraise the loss in planned production meters per day in a mine on account of availability loss in the machine due to breakdowns, underutilization of the machine and productivity loss in the machine measured in drilling meters per unit of percussion hour with respect to its planned productivity for the day. The given three losses would be essential to detect the bottlenecks in the process map of production drilling operation so as to instigate the action plan to suppress or prevent the causes leading to the operational performance deficiency. The given tool is beneficial to mine management to focus on the critical factors negatively impacting the production drilling operation and design necessary operational and maintenance strategies to mitigate them.Keywords: process map, drilling loss matrix, SIPOC, productivity, percussion rate
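One possible reading of the drilling loss matrix is sketched below as a simple calculation that converts availability, utilization, and productivity losses into lost planned metres per day; the field names and sample figures are illustrative assumptions, not values from the study.

```python
# Illustrative sketch of the drilling loss matrix: loss in planned production
# metres per day split into availability, utilization and productivity losses.
# Field names and the sample figures are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class DrillShift:
    planned_hours: float     # planned percussion hours for the day
    breakdown_hours: float   # hours lost to breakdowns (availability loss)
    idle_hours: float        # machine available but not drilling (utilization loss)
    actual_metres: float     # metres actually drilled
    planned_rate: float      # planned productivity, metres per percussion hour

def loss_matrix(s: DrillShift) -> dict:
    drilled_hours = s.planned_hours - s.breakdown_hours - s.idle_hours
    availability_loss = s.breakdown_hours * s.planned_rate                # metres
    utilization_loss = s.idle_hours * s.planned_rate                      # metres
    productivity_loss = max(drilled_hours * s.planned_rate - s.actual_metres, 0.0)
    return {
        "availability_loss_m": availability_loss,
        "utilization_loss_m": utilization_loss,
        "productivity_loss_m": productivity_loss,
        "total_loss_m": availability_loss + utilization_loss + productivity_loss,
    }

print(loss_matrix(DrillShift(planned_hours=10, breakdown_hours=1.5,
                             idle_hours=2.0, actual_metres=150,
                             planned_rate=25)))
```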
Procedia PDF Downloads 215
1217 Study on the Rapid Start-up and Functional Microorganisms of the Coupled Process of Short-range Nitrification and Anammox in Landfill Leachate Treatment
Authors: Lina Wu
Abstract:
The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and poses a threat to water quality. Nitrogen pollution control has become a global concern. Currently, the problem of water pollution in China is still not optimistic. As a typical high-ammonia-nitrogen organic wastewater, landfill leachate is more difficult to treat than domestic sewage because of its complex composition, high toxicity, and high concentration. Many studies have shown that autotrophic anammox bacteria in nature can combine nitrite and ammonia nitrogen without a carbon source, through functional genes, to achieve total nitrogen removal, which is very suitable for the removal of nitrogen from leachate. In addition, the process saves considerable aeration energy compared with the traditional nitrogen removal process. Therefore, anammox plays an important role in nitrogen conversion and energy saving. A process composed of short-range (partial) nitrification and denitrification coupled with anammox ensures the removal of total nitrogen and improves the removal efficiency, meeting society's need for an ecologically friendly and cost-effective nutrient removal treatment technology. A continuous flow process for treating late leachate [an up-flow anaerobic sludge blanket reactor (UASB), anoxic/oxic (A/O)–anaerobic ammonia oxidation reactor (ANAOR or anammox reactor)] has been developed to achieve autotrophic deep nitrogen removal. In this process, the optimal process parameters, such as hydraulic retention time and nitrification flow rate, have been obtained and applied to achieve rapid start-up, stable operation, and high removal efficiency of the process system. Besides, identifying the characteristics of the microbial community during the start-up of the anammox process system and analyzing its microbial ecological mechanism provide a basis for the enrichment of the anammox microbial community under high environmental stress. One study developed partial nitrification-anammox (PN/A) using an internal circulation (IC) system and a biological aerated filter (BAF) biofilm reactor (IBBR), where the amount of water treated is closer to that of landfill leachate. However, new high-throughput sequencing technology is still required to analyze the changes in microbial diversity of this system, the related functional genera, and the functional genes under optimal conditions, providing a theoretical and practical basis for the engineering application of the novel anammox system in biogas slurry treatment and resource utilization. Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification
Procedia PDF Downloads 51
1216 Knowledge Management Processes as a Driver of Knowledge-Worker Performance in Public Health Sector of Pakistan
Authors: Shahid Razzaq
Abstract:
Governments around the globe have started to take knowledge management dynamics into consideration while formulating, implementing, and evaluating strategies, with or without conscious realization, for different public sector organizations and public policy developments. The Health Department of Punjab province in Pakistan is striving to deliver quality healthcare services to the community through an efficient and effective service delivery system. Despite this effort, some employee performance issues still exist and pose a challenge to the government. To overcome these issues, the department took several steps, including HR strategies, the use of technologies, and a focus on hard issues. Consequently, this study was undertaken to highlight the importance of a soft issue, namely knowledge management in its true essence, in tackling these performance issues. Knowledge management in the public sector is quite an ignored area within knowledge management, a growing multidisciplinary research discipline. The knowledge-based view of the firm asserts that knowledge is the resource most likely to produce a competitive advantage for an organization over competing organizations. In the context of our study, this means that to improve employee performance, organizations have to increase their heterogeneous knowledge bases. The study uses a cross-sectional and quantitative research design. The data were collected from knowledge workers of the Health Department of Punjab, the biggest province of Pakistan. A total sample size of 341 was achieved. SmartPLS 3 (Version 2.6) was used for analyzing the data. The data examination revealed that knowledge management processes have a strong impact on knowledge worker performance. All hypotheses were accepted according to the results. Therefore, it can be concluded that to increase employee performance, knowledge management activities should be implemented. The Health Department of Punjab introduced knowledge management infrastructure and systems to make knowledge effectively available to the service staff. This knowledge management infrastructure resulted in an increase in knowledge management processes in different remote hospitals, basic health units, and care centers, which in turn resulted in greater service provision to the public. This study has both theoretical and practical significance. In terms of theoretical contribution, it establishes the relationship between knowledge management and performance for the first time. As for the practical contribution, it gives public sector organizations and government insight into the role of knowledge management in employee performance. Therefore, public policymakers are strongly advised to implement knowledge management activities to enhance the performance of knowledge workers. The current research validated the substantial role of knowledge management in shaping employee attitudes and behavioral intentions. To the best of the authors' knowledge, this study's contribution to understanding the impact of knowledge management on employee performance constitutes its originality. Keywords: employee performance, knowledge management, public sector, soft issues
Procedia PDF Downloads 141
1215 The Anti-Globalization Movement, Brexit, Outsourcing and the Current State of Globalization
Authors: Alexis Naranjo
Abstract:
On the current global stage, a new sense of mixed feelings against globalization has started to take shape thanks to events such as Brexit and the 2016 US election. Perceptions of globalization have started to crystallize in a resistance movement called the 'anti-globalization movement'. This paper examines the current global stage versus leadership decisions at a time when market integration is no longer seen as an opportunity to boost economic growth. The biggest economy in the world, the United States of America, has started to face the beginnings of something called 'anti-globalization'; on the current global stage, starting with the United Kingdom and extending to the United States, a new strategy to help local economies has started to emerge. A new nationalist movement has started to focus on local economies, which now represents a direct threat to globalization, trade agreements, wages, and free markets. Business leaders of multinationals now face a new dilemma: how to address the feeling that globalization and outsourcing destroy and take away jobs from local economies. The initial review of the literature and data reveals that companies in Western countries like the US see many risks associated with outsourcing; however, the cost savings associated with outsourcing are valued more highly than the firm's local reputation. Starting with India as a good example of a supplier of IT developers, analysts, and call centers, we can say that India is an industrialized nation that has not yet fully secured its spot and title. India has emerged as a powerhouse in the outsourcing industry, which gives India the number one spot in the world for outsourced IT services. Thanks to the globalization of economies and markets around the globe, new ideas for increasing productivity at a lower cost have existed for years and continue to offer new options to businesses in different industries. The economic growth of the information technology (IT) industry in India is an example of the power of globalization, which in the case of India has been tremendous and significant, especially in the economic arena. This research paper concentrates on understanding the behavior of business leaders: first, how multinationals' leaders will face the new challenges and what actions help them lead in turbulent times; second, if outsourcing or withdrawal from a market is an option, what the consequences are and how to communicate and negotiate from the business leader's perspective; and finally, whether leaders focus on financial results or have a different goal. To answer these questions, this study uses the most recent data available to outline and present findings on, first, why outsourcing is an option and, second, how and why those decisions are made. This research also explores the perception of the phenomenon of outsourcing in many ways and examines how globalization has contributed to its own questioning. Keywords: anti-globalization, globalization, leadership, outsourcing
Procedia PDF Downloads 194
1214 Executive Leadership in Kinesiology, Exercise and Sport Science: The Five 'C' Concept
Authors: Jim Weese
Abstract:
The Kinesiology, Exercise and Sport Science environment remain excellent venues for leadership research. Prescribed leadership (coaching), emergent leadership (players and organizations), and executive leadership are all popular themes in the research literature. Leadership remains a popular area of inquiry in the sport management domain as well as an interesting area for practitioners who wish to heighten their leadership practices and effectiveness. The need for effective leadership in these areas given competing demands for attention and resources may be at an all-time high. The presenter has extensive research and practical experience in the area and has developed his concept based on the latest leadership literature. He refers to this as the Five ’C’s of Leadership. These components, noted below, have been empirically validated and have served as the foundation for extensive consulting with academic, sport, and business leaders. Credibility (C1) is considered the foundation of leadership. There are two components to this area, namely: (a) leaders being respected for having the relevant knowledge, insights, and experience to be seen as credible sources of information, and (b) followers perceiving the leader as being a person of character, someone who is honest, reliable, consistent, and trustworthy. Compelling Vision (C2) refers to the leader’s ability to focus the attention of followers on a desired end goal. Effective leaders understand trends and developments in their industry. They also listen attentively to the needs and desires of their stakeholders and use their own instincts and experience to shape these ideas into an inspiring vision that is effectively and continuously communicated. Charismatic Communicator (C3) refers to the leader’s ability to formally and informally communicate with members. Leaders must deploy mechanisms and communication techniques to keep their members informed and engaged. Effective leaders sprinkle in ‘proof points’ that reinforce the vision’s relevance and/or the unit’s progress towards its attainment. Contagious Enthusiasm (C4) draws on the emotional intelligence literature as it relates to exciting and inspiring followers. Effective leaders demonstrate a level of care, commitment, and passion for their people and feelings of engagement permeate the group. These leaders genuinely care about the task at hand, and for the people working to make it a reality. Culture Builder (C5) is the capstone component of the model and is critical to long-term success and survival. Organizational culture refers to the dominant beliefs, values and attitudes of members of a group or organization. Some have suggested that developing and/or imbedding a desired culture for an organization is the most important responsibility for a leader. The author outlines his Five ‘C’s’ of Leadership concept and provide direct application to executive leadership in Kinesiology, Exercise and Sport Science.Keywords: effectiveness, leadership, management, sport
Procedia PDF Downloads 300
1213 The Use of Political Savviness in Dealing with Workplace Ostracism: A Social Information Processing Perspective
Authors: Amy Y. Wang, Eko L. Yi
Abstract:
Can vicarious experiences of workplace ostracism affect employees’ willingness to voice? Given the increasingly interdependent nature of the modern workplace in which employees rely on social interactions to fulfill organizational goals, workplace ostracism –the extent to which an individual perceives that he or she is ignored or excluded by others in the workplace– has garnered significant interest from scholars and practitioners alike. Extending beyond conventional studies that largely focus on the perspectives and outcomes of ostracized targets, we address the indirect effects of workplace ostracism on third-party employees embedded in the same social context. Using a social information processing approach, we propose that the ostracism of coworkers acts as political information that influences third-party employees in their decisions to engage in risky and discretionary behaviors such as employee voice. To make sense of and to navigate through experiences of workplace ostracism, we posit that both political understanding and political skill allow third party employees to minimize the risks and uncertainty of voicing. This conceptual model was tested by a study involving 154 supervisor-subordinate dyads of a publicly listed bio-technology firm located in Mainland China. Each supervisor and their direct subordinates composed of a work team; each team had a minimum of two subordinates and a maximum of four subordinates. Human resources used the master list to distribute the ID coded questionnaires to the matching names. All studied constructs were measured using existing scales proved effective in previous literature. Hypotheses were tested using Confirmatory Factor Analysis and Hierarchal Multiple Regression. All three hypotheses were supported which showed that employees were less likely to engage in voice behaviors when their coworkers reported having experienced ostracism in the workplace. Results also showed a significant three-way interaction between political understanding and political skill on the relationship between coworkers’ ostracism and employee voice, indicating that political savviness is a valuable resource in mitigating ostracism’s negative and indirect effects. Our results illustrated that an employee’s coworkers being ostracized indeed adversely impacted his or her own voice behavior. However, not all individuals reacted passively to the social context; rather, we found that politically savvy individuals – possessing both political understanding and political skill – and their voice behaviors were less impacted by ostracism in their work environment. At the same time, we found that having only political understanding or only political skill was significantly less effective in mitigating ostracism’s negative effects, suggesting a necessary duality of political knowledge and political skill in combatting ostracism. Organizational implications, recommendations, and future research ideas are also discussed.Keywords: employee voice, organizational politics, social information processing, workplace ostracism
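The reported three-way interaction test follows the familiar hierarchical (moderated moderation) regression pattern; a generic sketch of that analysis is given below with simulated placeholder data and assumed variable names, not the study's dataset.

```python
# Generic sketch of a hierarchical test of a three-way interaction:
# coworker ostracism x political understanding x political skill on voice.
# Variable names and data are placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 154  # same number of dyads as in the study; the data themselves are simulated
df = pd.DataFrame({
    "coworker_ostracism": rng.normal(size=n),
    "pol_understanding": rng.normal(size=n),
    "pol_skill": rng.normal(size=n),
})
df["voice"] = (-0.3 * df.coworker_ostracism
               + 0.2 * df.coworker_ostracism * df.pol_understanding * df.pol_skill
               + rng.normal(scale=0.8, size=n))

# Step 1: main effects only; Step 2: add two- and three-way interaction terms.
step1 = smf.ols("voice ~ coworker_ostracism + pol_understanding + pol_skill", df).fit()
step2 = smf.ols("voice ~ coworker_ostracism * pol_understanding * pol_skill", df).fit()

print(step2.params["coworker_ostracism:pol_understanding:pol_skill"])
print(f"Delta R^2 = {step2.rsquared - step1.rsquared:.3f}")
```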
Procedia PDF Downloads 140
1212 Between Leader-Member Exchange and Toxic Leadership: A Theoretical Review
Authors: Aldila Dyas Nurfitri
Abstract:
Nowadays, leadership has become one of the main issues in forming organizational groups and even countries. The concept of a social contract between leaders and subordinates has become one of the explanations for the leadership process. The interests of the two parties are not always the same, but they must work together to achieve both sets of goals. From this concept emerges the Leader-Member Exchange Theory, well known as LMX Theory, which assumes that leadership is a process of reciprocal social interaction between leaders and their subordinates. High-quality LMX relationships are characterized by high support, informal supervision, confidence, and enabled power negotiation, whereas low-quality LMX relationships are described by low support, extensive formal supervision, little or no participation of subordinates in decision-making, and less confidence as well as less attention from the leader. The application of a formal supervision system in low-LMX behavior is in line with the strict controls of the toxic leadership model. Toxic leaders must feel in control of all aspects of the organization at all times. Leaders with this leadership model do not give autonomy to their staff. This behavior causes stagnation and creates a resistant organizational culture in an organization. In Indonesia, the pattern of toxic leadership has evolved into a dysfunctional system that is growing rapidly. One consequence is the emergence of corrupt behavior. According to Kellerman, corruption is defined as a pattern in which the leader and some subordinates lie, cheat, or steal to a degree that goes beyond the norm, putting self-interest ahead of the common good. Corruption data for Indonesia from ICW research in 2012 showed that the local government sector ranked first with 177 cases, followed by state or local enterprises with 41 cases. LMX is defined as the quality of the relationship between superiors and subordinates, with implications for the effectiveness and progress of the organization. The assumption of this theory is that leadership is a process of reciprocal social interaction between leaders and their followers, characterized by a number of dimensions, such as affection, loyalty, contribution, and professional respect. Meanwhile, toxic leadership is dysfunctional leadership in an organization led by someone who is unable to adjust, lacks integrity, is malevolent and evil, and is full of discontent, marked by a number of characteristics, such as self-centeredness, exploitation of others, controlling behavior, disrespect of others, suppression of employees' innovation and creativity, and inadequate emotional intelligence. Leaders with high self-centeredness, exploitation of others, controlling behavior, and disrespect of others tend to have lower-quality LMX relationships with their subordinates than leaders low in these characteristics. The aspects of suppressing employees' innovation and creativity and of inadequate emotional intelligence, by contrast, tend not to have a direct effect on LMX quality. Keywords: leader-member exchange, toxic leadership, leadership
Procedia PDF Downloads 487
1211 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for each conditions and scenario cost significant amount of money and man hours, therefore model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burn gas temperature and the O2 concentration inside the cylinder. The current empirical models are developed by calibrating the parameters representing the engine operating conditions with respect to the measured NOx. This makes the prediction of purely empirical models limited to the region where it has been calibrated. An alternative solution to that is presented in this paper, which focus on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is shown by developing a fast and predictive NOx model by using the physical parameters and empirical correlation. The model is developed based on the steady state data collected at entire operating region of the engine and the predictive combustion model, which is developed in Gamma Technology (GT)-Power by using Direct Injected (DI)-Pulse combustion object. In this approach, temperature in both burned and unburnt zone is considered during the combustion period i.e. from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). Also, the oxygen concentration consumed in burnt zone and trapped fuel mass is also considered while developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. Substantial numbers of cases are tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT) are also considered for the validation. Model shows a very good predictability and robustness at both sea level and altitude condition with different ambient conditions. The various advantages such as high accuracy and robustness at different operating conditions, low computational time and lower number of data points requires for the calibration establishes the platform where the model-based approach can be used for the engine calibration and development process. Moreover, the focus of this work is towards establishing a framework for the future model development for other various targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio etc.Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
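A minimal sketch of the semi-empirical idea, where physically meaningful in-cylinder quantities feed an ensemble regressor that supplies the empirical correlation, is given below; the feature names, file name, and model choice are illustrative assumptions rather than the authors' exact formulation.

```python
# Illustrative sketch (not the authors' model): predict engine-out NOx from
# in-cylinder combustion parameters using an ensemble regressor.
# Column names are assumed placeholders for the physical features in the text.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

features = [
    "burned_zone_temp_K",     # burned-zone temperature over IVC to EVO
    "unburned_zone_temp_K",
    "o2_conc_burned_zone",    # oxygen consumed in the burned zone
    "trapped_fuel_mass_mg",
    "egr_rate_pct",
    "injection_timing_deg",
    "engine_speed_rpm",
    "load_bmep_bar",
]

df = pd.read_csv("steady_state_points.csv")  # hypothetical file of mapped points
X, y = df[features], df["nox_ppm"]

model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=4)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f} +/- {scores.std():.3f}")

model.fit(X, y)  # the fitted model can then be swept over EGR/VVT/timing variations
```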
Procedia PDF Downloads 114
1210 Glasshouse Experiment to Improve Phytomanagement Solutions for Cu-Polluted Mine Soils
Authors: Marc Romero-Estonllo, Judith Ramos-Castro, Yaiza San Miguel, Beatriz Rodríguez-Garrido, Carmela Monterroso
Abstract:
Mining activity is among the main sources of trace and heavy metal(loid) pollution worldwide, which is a hazard to human and environmental health. That is why several projects have been emerging for the remediation of such polluted places. Phytomanagement strategies draw good performances besides big side benefits. In this work, a glasshouse assay with trace element polluted soils from an old Cu mine ore (NW of Spain) which forms part of the PhytoSUDOE network of phytomanaged contaminated field sites (PhytoSUDOE Project (SOE1/P5/E0189)) was set. The objective was to evaluate improvements induced by the following phytoremediation-related treatments. Three increasingly complex amendments alone or together with plant growth (Populus nigra L. alone and together with Tripholium repens L.) were tested. And three different rhizosphere bioinocula were applied (Plant Growth Promoting Bacteria (PGP), mycorrhiza (MYC), or mixed (PGP+MYC)). After 110 days of growth, plants were collected, biomass was weighed, and tree length was measured. Physical-chemical analyses were carried out to determine pH, effective Cation Exchange Capacity, carbon and nitrogen contents, bioavailable phosphorous (Olsen bicarbonate method), pseudo total element content (microwave acid digested fraction), EDTA extractable metals (complexed fraction), and NH4NO3 extractable metals (easily bioavailable fraction). On plant material, nitrogen content and acid digestion elements were determined. Amendment usage, plant growth, and bioinoculation were demonstrated to improve soil fertility and/or plant health within the time span of this study. Particularly, pH levels increased from 3 (highly acidic) to 5 (acidic) in the worst-case scenario, even reaching 7 (neutrality) in the best plots. Organic matter and pH increments were related to polluting metals’ bioavailability decrements. Plants grew better both with the most complex amendment and the middle one, with few differences due to bioinoculation. Using the less complex amendment (just compost) beneficial effects of bioinoculants were more observable, although plants didn’t thrive very well. On unamended soils, plants neither sprouted nor bloomed. The scheme assayed in this study is suitable for phytomanagement of these kinds of soils affected by mining activity. These findings should be tested now on a larger scale.Keywords: aided phytoremediation, mine pollution, phytostabilization, soil pollution, trace elements
Procedia PDF Downloads 66
1209 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improvements in productivity, handling the increasing time and cost pressure and the need of individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant flexible production and constantly changing market and environmental conditions. To lift performance limits, which are inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies. There is no practicable framework, which instructs the transformation of current value chains into digital pervasive value chains. Current research shows that a connection between lean production and digitalization exists. This link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step of ‘target definition’ describes the target situation and defines the depth of the analysis with regards to the inspection area and the level of detail. The second step of ‘analysis of the value chain’ verifies the lean-ability of processes and lies in a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptions regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production achieves a raise of their inbuilt performance limits.Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain
Procedia PDF Downloads 313
1208 Pond Site Diagnosis: Monoclonal Antibody-Based Farmer Level Tests to Detect the Acute Hepatopancreatic Necrosis Disease in Shrimp
Authors: B. T. Naveen Kumar, Anuj Tyagi, Niraj Kumar Singh, Visanu Boonyawiwat, A. H. Shanthanagouda, Orawan Boodde, K. M. Shankar, Prakash Patil, Shubhkaramjeet Kaur
Abstract:
Early mortality syndrome (EMS)/acute hepatopancreatic necrosis disease (AHPND) has emerged as a major obstacle for shrimp farming around the world. It is caused by a strain of Vibrio parahaemolyticus. A possible preventive and control measure is early and rapid detection of the pathogen in broodstock and post-larvae and monitoring of the shrimp during the culture period. Polymerase chain reaction (PCR)-based early detection methods are good, but they are costly, time-consuming, and require a sophisticated laboratory. The present study was conducted to develop a simple, sensitive, and rapid farmer-level diagnostic for the reliable detection of AHPND in shrimp. A panel of monoclonal antibodies (MAbs) was raised against the recombinant PirB protein (rPirB). First, an immunodot assay was developed using MAbs G3B8 and G3H2, which showed specific reactivity to the purified rPirB protein with no cross-reactivity to other shrimp pathogens (AHPND-free Vibrio parahaemolyticus (Indian strains), V. anguillarum, WSSV, Aeromonas hydrophila, and Aphanomyces invadans). The immunodot developed using MAb G3B8 is more sensitive than that with MAb G3H2. However, the immunodot takes almost 2.5 hours to complete, with several hands-on steps. Therefore, a flow-through assay (FTA) was developed using a plastic cassette containing a nitrocellulose membrane with absorbent pads below. The sample was dotted in the test zone on the nitrocellulose membrane, followed by the sequential addition of five solutions in the order of i) blocking buffer (BSA), ii) primary antibody (MAb), iii) washing solution, iv) secondary antibody, and v) chromogen substrate (TMB). Clear purple dots against a white background were considered positive reactions. The FTA developed using MAb G3B8 is more sensitive than that with MAb G3H2. In the FTA, the two MAbs showed specific reactivity to the purified rPirB protein and not to the other shrimp pathogens. The FTA is simple enough for farmer/field-level use, sensitive, and rapid, requiring only 8-10 min for completion. The tests can be developed into kits, which would be ideal for use in biosecurity, for first-line screening (at the port or pond site), and during monitoring and surveillance programmes, supporting good management practices that reduce the risk of the disease. Keywords: acute hepatopancreatic necrosis disease, AHPND, flow-through assay, FTA, farmer level, immunodot, pond site, shrimp
Procedia PDF Downloads 174
1207 The Attitudes of Senior High School Students Toward Work Immersion Programs of Nazareth School of National University
Authors: Kim Katherine Castillo, Nelson John Datubatang, Terrence Phillip Dy, Norelie Hampac, Reichen Crismark Martinez, Nina Faith Pantinople, Jose Dante Santos II, Marchel Ann Santos, Sophia Abigail Santiago, Zyrill Xsar San Juan, Aira Mae Tagao, Crystal Kylla Viagedor
Abstract:
The Work Immersion Program was implemented to help students gain abundant work-related experiences while on-site; additionally, the program aims to help students improve their competencies and interpersonal skills as they are given the option to join the workforce if they ever choose to do so after senior high school. The work immersion experience posed diverse challenges for students, spanning personal, financial, engagement, environmental, and equipment-related domains. These included the need for assistance in time management, transportation expenses, and procurement of materials. Furthermore, students faced difficulties in independent task completion and encountered suboptimal work environments. Addressing these multifaceted obstacles is crucial to optimize the educational outcomes of work immersion programs. In addition to the challenges, several other issues have been identified, including the absence of standardized work immersion programs across schools and industries, the challenges in securing appropriate work immersion placements, the necessity for enhanced monitoring and evaluation of program effectiveness, and the limited availability of field programs aligned with students' chosen courses. Furthermore, there is a lack of comprehensive information regarding the attitudes of Senior High School students toward work immersion programs within their respective schools. This study aims to investigate the attitudes of senior high school students at Nazareth School of National University towards work immersion programs, with a focus on identifying factors that influence their perception and participation, including collegiality and expectations. By exploring the students' attitudes, the research endeavors to enhance the school's work immersion programs and contribute to the overall educational experience of the students. This study addresses challenges related to work immersion programs, focusing on six subtopics: Work Immersion, Work Immersion in the Philippines, Students' Attitudes, Factors Affecting Students' Attitudes, Effectiveness of Work Immersion for Senior High School Students, and Students' Perception and Willingness to Participate. Using a descriptive research design, the study examines the attitudes of senior high school students at Nazareth School of National University in Manila. Data was collected from 100 students, representing different academic strands, through a 35-item researcher-made survey. Descriptive statistics, including measures of central tendency and variability, will be used to analyze the data using JASP, providing valuable insights into the students' attitudes toward work immersion.Keywords: attitudes, challenges, educational outcomes, work immersion programs
Procedia PDF Downloads 95
1206 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme
Authors: Eleanor Nel
Abstract:
Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. Resultantly, academic institutions face significant pressure from governing bodies to provide evidence on the return for research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs often surpasses the development of robust, widely accepted tools to additionally measure research impact at institutions. A publication writing programme, for enhancing research output, was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publishing does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period of 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period. To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from a particular faculty as a convenience sample. The purpose of the research was to collect information to develop a comprehensive framework for impact evaluation that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve the impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results more effectively from publication writing programmes to institutional strategic objectives to improve research performance and quality, as well as what should be included in a comprehensive evaluation framework.Keywords: evaluation, framework, impact, research output
Procedia PDF Downloads 76
1205 The Effect of Different Strength Training Methods on Muscle Strength, Body Composition and Factors Affecting Endurance Performance
Authors: Shaher A. I. Shalfawi, Fredrik Hviding, Bjornar Kjellstadli
Abstract:
The main purpose of this study was to measure the effect of two different strength training methods on muscle strength, muscle mass, fat mass and endurance factors. Fourteen physical education students accepted to participate in this study. The participants were then randomly divided into three groups, traditional training group (TTG), cluster training group (CTG) and control group (CG). TTG consisted of 4 participants aged ( ± SD) (22.3 ± 1.5 years), body mass (79.2 ± 15.4 kg) and height (178.3 ± 11.9 cm). CTG consisted of 5 participants aged (22.2 ± 3.5 years), body mass (81.0 ± 24.0 kg) and height (180.2 ± 12.3 cm). CG consisted of 5 participants aged (22 ± 2.8 years), body mass (77 ± 19 kg) and height (174 ± 6.7 cm). The participants underwent a hypertrophy strength training program twice a week consisting of 4 sets of 10 reps at 70% of one-repetition maximum (1RM), using barbell squat and barbell bench press for 8 weeks. The CTG performed 2 x 5 reps using 10 s recovery in between repetitions and 50 s recovery between sets, while TTG performed 4 sets of 10 reps with 90 s recovery in between sets. Pre- and post-tests were administrated to assess body composition (weight, muscle mass, and fat mass), 1RM (bench press and barbell squat) and a laboratory endurance test (Bruce Protocol). Instruments used to collect the data were Tanita BC-601 scale (Tanita, Illinois, USA), Woodway treadmill (Woodway, Wisconsin, USA) and Vyntus CPX breath-to-breath system (Jaeger, Hoechberg, Germany). Analysis was conducted at all measured variables including time to peak VO2, peak VO2, heart rate (HR) at peak VO2, respiratory exchange ratio (RER) at peak VO2, and number of breaths per minute. The results indicate an increase in 1RM performance after 8 weeks of training. The change in 1RM squat was for the TTG = 30 ± 3.8 kg, CTG = 28.6 ± 8.3 kg and CG = 10.3 ± 13.8 kg. Similarly, the change in 1RM bench press was for the TTG = 9.8 ± 2.8 kg, CTG = 7.4 ± 3.4 kg and CG = 4.4 ± 3.4 kg. The within-group analysis from the oxygen consumption measured during the incremental exercise indicated that the TTG had only a statistical significant increase in their RER from 1.16 ± 0.04 to 1.23 ± 0.05 (P < 0.05). The CTG had a statistical significant improvement in their HR at peak VO2 from 186 ± 24 to 191 ± 12 Beats Per Minute (P < 0.05) and their RER at peak VO2 from 1.11 ± 0.06 to 1.18 ±0.05 (P < 0.05). Finally, the CG had only a statistical significant increase in their RER at peak VO2 from 1.11 ± 0.07 to 1.21 ± 0.05 (P < 0.05). The between-group analysis showed no statistical differences between all groups in all the measured variables from the oxygen consumption test during the incremental exercise including changes in muscle mass, fat mass, and weight (kg). The results indicate a similar effect of hypertrophy strength training irrespective of the methods of the training used on untrained subjects. Because there were no notable changes in body-composition measures, the results suggest that the improvements in performance observed in all groups is most probably due to neuro-muscular adaptation to training.Keywords: hypertrophy strength training, cluster set, Bruce protocol, peak VO2
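The within- and between-group comparisons described above follow a standard pre/post design; a brief sketch of that analysis pattern is shown below using placeholder numbers, not the study's measurements.

```python
# Minimal sketch of the pre/post analysis pattern used above: paired t-tests
# within each group and a one-way ANOVA on the change scores between groups.
# The arrays are placeholder values, not the study's measurements.
import numpy as np
from scipy import stats

# 1RM squat (kg), pre and post, per group (placeholder data)
groups = {
    "TTG": (np.array([90, 100, 110, 95.0]),     np.array([120, 130, 140, 125.0])),
    "CTG": (np.array([85, 95, 105, 100, 90.0]), np.array([115, 120, 135, 130, 118.0])),
    "CG":  (np.array([80, 90, 100, 95, 85.0]),  np.array([88, 100, 112, 105, 95.0])),
}

changes = {}
for name, (pre, post) in groups.items():
    t, p = stats.ttest_rel(post, pre)          # within-group change
    changes[name] = post - pre
    print(f"{name}: mean change {np.mean(post - pre):.1f} kg, paired t p = {p:.3f}")

f, p = stats.f_oneway(changes["TTG"], changes["CTG"], changes["CG"])  # between groups
print(f"Between-group ANOVA on change scores: F = {f:.2f}, p = {p:.3f}")
```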
Procedia PDF Downloads 250
1204 Understanding Feminization of Indian Agriculture and the Dynamics of Intrahousehold Bargaining Power at a Household Level
Authors: Arpit Sachan, Nilanshu Kumar
Abstract:
This paper tries to understand the nuances of feminisation of agriculture in the Indian context and how that is associated with better intrahousehold bargaining power for women. The economic survey of India indicates a constant increase in the share of the female workforce in Indian agriculture in the past few decades. This can be accounted for by many factors like the migration of male workers to urban areas and, therefore, the complete burden of agriculture shifting on the female counterparts. Therefore this study is an attempt to study that how this increase in the female workforce corresponds to a better decision-making ability for women in rural farm households. This paper is an attempt to carefully evaluate this aspect of the feminisation of Indian agriculture. The paper tries to study how various factors that improve the status of women in agriculture change with things like resource ownership. This paper uses both the macro-level and micro-level data to study the dynamics of the proportion of the workforce in agriculture across different states in India and how that has translated into better indicators for women in rural areas. The fall in India’s rank in the global gender wage gap index is alarming in such a context, and this creates a puzzle with increasing female workforce participation. The paper will consider if the condition of women improved over time with the increased share of employment or not? Using field survey data, this paper tries to understand if there exists any digression for some of the indicators both at the macro and micro level. The paper also tries to integrate the economic understanding of gender aspects of the workforce and the sociological stance prevailing in the existing literature. Therefore, this paper takes a mixed-method approach to better understand the role that social structure plays in the improved status of women within and across various households. Therefore, this paper will finally help us understanding if at all there is a feminisation of Indian agriculture or it's just exploitation of a different kind. This study intends to create a distinction between the gendered labour force in Indian agriculture and the complete democratization of Indian agriculture. The study is primarily focused on areas where the exodus of male migrants pushes women to work on agricultural farms. The question posits is whether it is the willingness of women to work in agriculture or is it urbanisation and development-induced conditions that make women work in agriculture as farm labourers? The motive is to understand if factors like resource ownership and the ability to autonomous decision-making are interlinked with an increased proportion of the female workforce or not? Based on this framework, we finally provide a brief comment on policy implications of government intervention in improving Indian agriculture and the gender aspects associated with it.Keywords: feminisation, intrahousehold bargaining, farm households, migration, agriculture, decision-making
Procedia PDF Downloads 130
1203 Roadway Infrastructure and Bus Safety
Authors: Richard J. Hanowski, Rebecca L. Hammond
Abstract:
Very few studies have been conducted to investigate safety issues associated with motorcoach/bus operations. The current study investigates the impact that roadway infrastructure, including locality, roadway grade, traffic flow and traffic density, have on bus safety. A naturalistic driving study was conducted in the U.S.A that involved 43 motorcoaches. Two fleets participated in the study and over 600,000 miles of naturalistic driving data were collected. Sixty-five bus drivers participated in this study; 48 male and 17 female. The average age of the drivers was 49 years. A sophisticated data acquisition system (DAS) was installed on each of the 43 motorcoaches and a variety of kinematic and video data were continuously recorded. The data were analyzed by identifying safety critical events (SCEs), which included crashes, near-crashes, crash-relevant conflicts, and unintentional lane deviations. Additionally, baseline (normative driving) segments were also identified and analyzed for comparison to the SCEs. This presentation highlights the need for bus safety research and the methods used in this data collection effort. With respect to elements of roadway infrastructure, this study highlights the methods used to assess locality, roadway grade, traffic flow, and traffic density. Locality was determined by manual review of the recorded video for each event and baseline and was characterized in terms of open country, residential, business/industrial, church, playground, school, urban, airport, interstate, and other. Roadway grade was similarly determined through video review and characterized in terms of level, grade up, grade down, hillcrest, and dip. The video was also used to make a determination of the traffic flow and traffic density at the time of the event or baseline segment. For traffic flow, video was used to assess which of the following best characterized the event or baseline: not divided (2-way traffic), not divided (center 2-way left turn lane), divided (median or barrier), one-way traffic, or no lanes. In terms of traffic density, level-of-service categories were used: A1, A2, B, C, D, E, and F. Highlighted in this abstract are only a few of the many roadway elements that were coded in this study. Other elements included lighting levels, weather conditions, roadway surface conditions, relation to junction, and roadway alignment. Note that a key component of this study was to assess the impact that driver distraction and fatigue have on bus operations. In this regard, once the roadway elements had been coded, the primary research questions that were addressed were (i) “What environmental condition are associated with driver choice of engagement in tasks?”, and (ii) “what are the odds of being in a SCE while engaging in tasks while encountering these conditions?”. The study may be of interest to researchers and traffic engineers that are interested in the relationship between roadway infrastructure elements and safety events in motorcoach bus operations.Keywords: bus safety, motorcoach, naturalistic driving, roadway infrastructure
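As a minimal sketch of the second research question above (the odds of being in an SCE while engaging in tasks), the snippet below computes an odds ratio from a 2x2 table of safety-critical events and baseline segments; all counts are hypothetical, not figures from the study.

```python
# Hedged sketch: odds ratio of a safety-critical event (SCE) during task engagement,
# computed from a hypothetical 2x2 table of SCEs vs. baseline segments.
import numpy as np
from scipy.stats import fisher_exact

# Rows: task engaged / not engaged; columns: SCE / baseline segment (hypothetical counts)
table = np.array([[40, 160],
                  [60, 540]])

odds_ratio, p_value = fisher_exact(table)
print(f"Odds ratio = {odds_ratio:.2f}, Fisher exact p = {p_value:.4f}")
```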
Procedia PDF Downloads 180
1202 Analyzing the Construction of Collective Memories by History Movies/TV Programs: Case Study of Masters in the Forbidden City
Authors: Lulu Wang, Yongjun Xu, Xiaoyang Qiao
Abstract:
The Forbidden City is well known for being full of Chinese cultural and historical relics. However, the Masters in the Forbidden City, a documentary film, doesn’t just dwell on the stories of the past. Instead, it focuses on ordinary people—the restorers of the relics and antiquities, which has caught the sight of Chinese audiences. From this popular documentary film, a new way can be considered, that is to show the relics, antiquities and painting with a character of modern humanities by films and TV programs. Of course, it can’t just like a simple explanation from tour guides in museums. It should be a perfect combination of scenes, heritages, stories, storytellers and background music. All we want to do is trying to dig up the humanity behind the heritages and then create a virtual scene for the audience to have emotional resonance from the humanity. It is believed that there are two problems. One is that compared with the entertainment shows, why people prefer to see the boring restoration work. The other is that what the interaction is between those history documentary films, the heritages, the audiences and collective memory. This paper mainly used the methods of text analysis and data analysis. The audiences’ comment texts were collected from all kinds of popular video sites. Through analyzing those texts, there was a word cloud chart about people preferring to use what kind of words to comment the film. Then the usage rate of all comments words was calculated. After that, there was a Radar Chart to show the rank results. Eventually, each of them was given an emotional value classification according their comment tone and content. Based on the above analysis results, an interaction model among the audience, history films/TV programs and the collective memory can be summarized. According to the word cloud chart, people prefer to use such words to comment, including moving, history, love, family, celebrity, tone... From those emotional words, we can see Chinese audience felt so proud and shared the sense of Collective Identity, so they leave such comments: To our great motherland! Chinese traditional culture is really profound! It is found that in the construction of collective memory symbology, the films formed an imaginary system by organizing a ‘personalized audience’. The audience is not just a recipient of information, but a participant of the documentary films and a cooperator of collective memory. At the same time, it is believed that the traditional background music, the spectacular present scenes and the tone of the storytellers/hosts are also important, so it is suggested that the museums could try to cooperate with the producers of movie and TV program to create a vivid scene for the people. Maybe it’s a more artistic way for heritages to be open to all the world.Keywords: audience, heritages, history movies, TV programs
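A minimal sketch of the word-frequency step behind the word cloud and usage-rate ranking described above is shown below; the sample comments are invented for illustration and are not the collected comment texts.

```python
# Hedged sketch: word-frequency counting of audience comments, the kind of tally
# behind a word cloud and a usage-rate ranking. The comments here are hypothetical.
from collections import Counter
import re

comments = [
    "So moving, the history behind every relic is touching",
    "Proud of our history and traditional culture",
    "The restorers' love for their craft is moving",
]

tokens = []
for text in comments:
    tokens.extend(re.findall(r"[a-z']+", text.lower()))

counts = Counter(tokens)
total = sum(counts.values())
for word, n in counts.most_common(5):
    print(f"{word}: {n} ({n / total:.1%} of all words)")
```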
Procedia PDF Downloads 161
1201 Effects of Different Fungicide In-Crop Treatments on Plant Health Status of Sunflower (Helianthus annuus L.)
Authors: F. Pal-Fam, S. Keszthelyi
Abstract:
Phytosanitary condition of sunflower (Helianthus annuus L.) was endangered by several phytopathogenic agents, mainly microfungi, such as Sclerotinia sclerotiorum, Diaporthe helianthi, Plasmopara halstedtii, Macrophomina phaseolina and so on. There are more agrotechnical and chemical technologies against them, for instance, tolerant hybrids, crop rotations and eventually several in-crop chemical treatments. There are different fungicide treatment methods in sunflower in Hungarian agricultural practice in the quest of obtaining healthy and economic plant products. Besides, there are many choices of useable active ingredients in Hungarian sunflower protection. This study carried out into the examination of the effect of five different fungicide active substances (found on the market) and three different application modes (early; late; and early and late treatments) in a total number of 9 sample plots, 0.1 ha each other. Five successive vegetation periods have been investigated in long term, between 2013 and 2017. The treatments were: 1)untreated control; 2) boscalid and dimoxystrobin late treatment (July); 3) boscalid and dimoxystrobin early treatment (June); 4) picoxystrobin and cyproconazole early treatment; 5) picoxystrobin and cymoxanil and famoxadone early treatment; 6) picoxystrobin and cyproconazole early; cymoxanil and famoxadone late treatments; 7) picoxystrobin and cyproconazole early; picoxystrobin and cymoxanil and famoxadone late treatments; 8) trifloxystrobin and cyproconazole early treatment; and 9) trifloxystrobin and cyproconazole both early and late treatments. Due to the very different yearly weather conditions different phytopathogenic fungi were dominant in the particular years: Diaporthe and Alternaria in 2013; Alternaria and Sclerotinia in 2014 and 2015; Alternaria, Sclerotinia and Diaporthe in 2016; and Alternaria in 2017. As a result of treatments ‘infection frequency’ and ‘infestation rate’ showed a significant decrease compared to the control plot. There were no significant differences between the efficacies of the different fungicide mixes; all were almost the same effective against the phytopathogenic fungi. The most dangerous Sclerotinia infection was practically eliminated in all of the treatments. Among the single treatments, the late treatment realised in July was the less efficient, followed by the early treatments effectuated in June. The most efficient was the double treatments realised in both June and July, resulting 70-80% decrease of the infection frequency, respectively 75-90% decrease of the infestation rate, comparing with the control plot in the particular years. The lowest yield quantity was observed in the control plot, followed by the late single treatment. The yield of the early single treatments was higher, while the double treatments showed the highest yield quantities (18.3-22.5% higher than the control plot in particular years). In total, according to our five years investigation, the most effective application mode is the double in-crop treatment per vegetation time, which is reflected by the yield surplus.Keywords: fungicides, treatments, phytopathogens, sunflower
Procedia PDF Downloads 141
1200 Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets
Authors: Debjit Ray
Abstract:
Horizontal gene transfer (HGT) and recombination leads to the emergence of bacterial antibiotic resistance and pathogenic traits. HGT events can be identified by comparing a large number of fully sequenced genomes across a species or genus, define the phylogenetic range of HGT, and find potential sources of new resistance genes. In-depth comparative phylogenomics can also identify subtle genome or plasmid structural changes or mutations associated with phenotypic changes. Comparative phylogenomics requires that accurately sequenced, complete and properly annotated genomes of the organism. Assembling closed genomes requires additional mate-pair reads or “long read” sequencing data to accompany short-read paired-end data. To bring down the cost and time required of producing assembled genomes and annotating genome features that inform drug resistance and pathogenicity, we are analyzing the performance for genome assembly of data from the Illumina NextSeq, which has faster throughput than the Illumina HiSeq (~1-2 days versus ~1 week), and shorter reads (150bp paired-end versus 300bp paired end) but higher capacity (150-400M reads per run versus ~5-15M) compared to the Illumina MiSeq. Bioinformatics improvements are also needed to make rapid, routine production of complete genomes a reality. Modern assemblers such as SPAdes 3.6.0 running on a standard Linux blade are capable in a few hours of converting mixes of reads from different library preps into high-quality assemblies with only a few gaps. Remaining breaks in scaffolds are generally due to repeats (e.g., rRNA genes) are addressed by our software for gap closure techniques, that avoid custom PCR or targeted sequencing. Our goal is to improve the understanding of emergence of pathogenesis using sequencing, comparative genomics, and machine learning analysis of ~1000 pathogen genomes. Machine learning algorithms will be used to digest the diverse features (change in virulence genes, recombination, horizontal gene transfer, patient diagnostics). Temporal data and evolutionary models can thus determine whether the origin of a particular isolate is likely to have been from the environment (could it have evolved from previous isolates). It can be useful for comparing differences in virulence along or across the tree. More intriguing, it can test whether there is a direction to virulence strength. This would open new avenues in the prediction of uncharacterized clinical bugs and multidrug resistance evolution and pathogen emergence.Keywords: genomics, pathogens, genome assembly, superbugs
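As a hedged sketch of the machine-learning step described above (relating genome-derived features to pathogenic potential), the snippet below trains a generic classifier on synthetic features; the feature set, labels, and model choice are assumptions made for illustration and are not the project's actual pipeline.

```python
# Hedged sketch: classifying pathogenic potential from genome-derived features
# (e.g., virulence gene counts, putative HGT events). Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_genomes = 200
X = np.column_stack([
    rng.poisson(5, n_genomes),    # virulence gene count (assumed feature)
    rng.poisson(2, n_genomes),    # putative HGT events (assumed feature)
    rng.poisson(1, n_genomes),    # acquired resistance genes (assumed feature)
])
y = (X[:, 0] + X[:, 1] + rng.normal(0, 2, n_genomes) > 7).astype(int)  # synthetic label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```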
Procedia PDF Downloads 197
1199 Multi-Criteria Selection and Improvement of Effective Design for Generating Power from Sea Waves
Authors: Khaled M. Khader, Mamdouh I. Elimy, Omayma A. Nada
Abstract:
Sustainable development is the nominal goal of most countries at present. In general, fossil fuels are the development mainstay of most world countries. Regrettably, the fossil fuel consumption rate is very high, and the world is facing the problem of conventional fuels depletion soon. In addition, there are many problems of environmental pollution resulting from the emission of harmful gases and vapors during fuel burning. Thus, clean, renewable energy became the main concern of most countries for filling the gap between available energy resources and their growing needs. There are many renewable energy sources such as wind, solar and wave energy. Energy can be obtained from the motion of sea waves almost all the time. However, power generation from solar or wind energy is highly restricted to sunny periods or the availability of suitable wind speeds. Moreover, energy produced from sea wave motion is one of the cheapest types of clean energy. In addition, renewable energy usage of sea waves guarantees safe environmental conditions. Cheap electricity can be generated from wave energy using different systems such as oscillating bodies' system, pendulum gate system, ocean wave dragon system and oscillating water column device. In this paper, a multi-criteria model has been developed using Analytic Hierarchy Process (AHP) to support the decision of selecting the most effective system for generating power from sea waves. This paper provides a widespread overview of the different design alternatives for sea wave energy converter systems. The considered design alternatives have been evaluated using the developed AHP model. The multi-criteria assessment reveals that the off-shore Oscillating Water Column (OWC) system is the most appropriate system for generating power from sea waves. The OWC system consists of a suitable hollow chamber at the shore which is completely closed except at its base which has an open area for gathering moving sea waves. Sea wave's motion pushes the air up and down passing through a suitable well turbine for generating power. Improving the power generation capability of the OWC system is one of the main objectives of this research. After investigating the effect of some design modifications, it has been concluded that selecting the appropriate settings of some effective design parameters such as the number of layers of Wells turbine fans and the intermediate distance between the fans can result in significant improvements. Moreover, simple dynamic analysis of the Wells turbine is introduced. Furthermore, this paper strives for comparing the theoretical and experimental results of the built experimental prototype.Keywords: renewable energy, oscillating water column, multi-criteria selection, Wells turbine
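A minimal sketch of the standard AHP weighting step used above (priority weights from the principal eigenvector of a pairwise comparison matrix, followed by Saaty's consistency check) is given below; the pairwise judgments and the four alternatives are illustrative assumptions, not the study's actual comparisons.

```python
# Hedged sketch: AHP priority weights from a pairwise comparison matrix
# (principal eigenvector) with a consistency check. Judgments are illustrative only.
import numpy as np

# Pairwise comparisons of four wave-energy systems on one criterion (Saaty's 1-9 scale)
A = np.array([
    [1.0, 3.0, 5.0, 4.0],
    [1/3, 1.0, 3.0, 2.0],
    [1/5, 1/3, 1.0, 1/2],
    [1/4, 1/2, 2.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
cr = ci / ri                                  # consistency ratio (< 0.10 is acceptable)

print("Priority weights:", np.round(weights, 3))
print(f"Consistency ratio: {cr:.3f}")
```

In a full AHP model, weights of this kind would be computed for each criterion and combined with criterion weights to rank the candidate systems.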
Procedia PDF Downloads 162
1198 Prevalence of Work-Related Musculoskeletal Disorder among Dental Personnel in Perak
Authors: Nursyafiq Ali Shibramulisi, Nor Farah Fauzi, Nur Azniza Zawin Anuar, Nurul Atikah Azmi, Janice Hew Pei Fang
Abstract:
Background: Work related musculoskeletal disorders (WRMD) among dental personnel have been underestimated and under-reported worldwide and specifically in Malaysia. The problem will arise and progress slowly over time, as it results from accumulated injury throughout the period of work. Several risk factors, such as repetitive movement, static posture, vibration, and adapting poor working postures, have been identified to be contributing to WRMSD in dental practices. Dental personnel is at higher risk of getting this problem as it is their working nature and core business. This would cause pain and dysfunction syndrome among them and result in absence from work and substandard services to their patients. Methodology: A cross-sectional study involving 19 government dental clinics in Perak was done over the period of 3 months. Those who met the criteria were selected to participate in this study. Malay version of the Self-Reported Nordic Musculoskeletal Discomfort Form was used to identify the prevalence of WRMSD, while the intensity of pain in the respective regions was evaluated using a 10-point scale according to ‘Pain as The 5ᵗʰ Vital Sign’ by MOH Malaysia and later on were analyzed using SPSS version 25. Descriptive statistics, including mean and SD and median and IQR, were used for numerical data. Categorical data were described by percentage. Pearson’s Chi-Square Test and Spearman’s Correlation were used to find the association between the prevalence of WRMSD and other socio-demographic data. Results: 159 dentists, 73 dental therapists, 26 dental lab technicians, 81 dental surgery assistants, and 23 dental attendants participated in this study. The mean age for the participants was 34.9±7.4 and their mean years of service was 9.97±7.5. Most of them were female (78.5%), Malay (71.3%), married (69.6%) and right-handed (90.1%). The highest prevalence of WRMSD was neck (58.0%), followed by shoulder (48.1%), upper back (42.0%), lower back (40.6%), hand/wrist (31.5%), feet (21.3%), knee (12.2%), thigh 7.7%) and lastly elbow (6.9%). Most of those who reported having neck pain scaled their pain experiences at 2 out of 10 (19.5%), while for those who suffered upper back discomfort, most of them scaled their pain experience at 6 out of 10 (17.8%). It was found that there was a significant relationship between age and pain at neck (p=0.007), elbow (p=0.027), lower back (p=0.032), thigh (p=0.039), knee (p=0.001) and feet (p=0.000) regions. Job position also had been found to be having a significant relationship with pain experienced at the lower back (p=0.018), thigh (p=0.011), knee, and feet (p=0.000). Conclusion: The prevalence of WRMSD among dental personnel in Perak was found to be high. Age and job position were found to be having a significant relationship with pain experienced in several regions. Intervention programs should be planned and conducted to prevent and reduce the occurrence of WRMSD, as all harmful or unergonomic practices should be avoided at all costs.Keywords: WRMSD, ergonomic, dentistry, dental
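A minimal sketch of the kind of association test reported above (for example, age group versus neck pain) is shown below; the contingency table is hypothetical and is not the study's data.

```python
# Hedged sketch: chi-square test of association between an exposure (age group)
# and a pain outcome (neck pain), using a hypothetical contingency table.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: age < 35 / age >= 35; columns: neck pain yes / no (hypothetical counts)
table = np.array([[95, 105],
                  [115, 47]])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```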
Procedia PDF Downloads 88
1197 Global Production of Systematic Reviews on Population Health Issues in the Middle East and North Africa: Preliminary Results of a Systematic Overview and Bibliometric Analysis, 2008-2016
Authors: Karima Chaabna, Sohaila Cheema, Amit Abraham, Hekmat Alrouh, Ravinder Mamtani, Javaid I. Sheikh
Abstract:
We aimed to assess the production of systematic reviews (SRs) that synthesize observational studies discussing population health issues in the Middle East and North Africa (MENA). Two independent reviewers systematically searched MEDLINE through PubMed. Between 2008-2016, 5,747 articles (reviews, systematic reviews, and meta-analyses) were identified. Following a multi-stage screening process, 387 SRs (with or without meta-analysis) on population health issues in the MENA were included in our overview. Citation numbers for each SR were retrieved from Google Scholar. Impact factor of the journal during the publication year for the included SRs was retrieved from the Institute of Scientific Information’s Journal Citation Report. We conducted linear regression analysis to assess time trends of number of publications according to SRs’ characteristics. We characterized a linear statistically significant increase in the annual numbers of SRs that summarize observational studies on the MENA population health (p-value<0.0001, R2=0.95), from 15 in 2008 to 81 in 2016. Our analysis reveals also linear statistically significant increases in numbers of SRs published by authors affiliated to institutions located inside MENA and/or neighboring countries (N=113, p-value < 0.0001, R²=0.90), by authors located outside MENA (N=155, p-value=0.0007, R²=0.82), and by collaborating authors affiliated to institutions located outside MENA and inside the region and/or in MENA’s neighboring countries (total number of SRs (N)= 119, p-value=0.0004, R²=0.85). Furthermore, these SRs were published in journals with an IF ranging from 0 to 47.8 (median=2.1). Linear statistically significant increases in numbers of published SRs were demonstrated in journals’ impact factor (IF) categories (IF=[0-2[: R²=0.79, p-value=0.0012; IF=[2-4[:R²=0.86, p-value=0.0003; and IF=[4-6[:R²=0.53, p-value=0.026). Additionally, annual numbers of citations to the SRs varied between 0 and 471 (median=7). While each year, a couple of SRs were getting more than 50 annual citations, there were linear statistically significant increases in numbers of published SRs with an annual number of citations at [0-10[(R²=0.89, p-value=0.00014) and at [10-50[ (R²=0.76, p-value=0.0021). Between 2008-2016, increasingly SRs that summarize observational studies on population health issues in the MENA were published. Authors of these SRs were located inside and/or outside the MENA region and an increasing number of collaborations were seen. Increasing numbers of SRs were predominantly observed in journals with an IF between zero and six. Interestingly, SRs covering MENA region countries were being increasingly cited, indicating an escalation of interest in this region’s population health issues.Keywords: bibliometric, citation, impact factor, Middle East and North Africa, population health, systematic review
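A minimal sketch of the linear trend analysis described above is given below; the 2008 and 2016 counts match the abstract, while the intermediate annual counts are assumed for illustration only.

```python
# Hedged sketch: linear regression of annual systematic-review counts on year.
# The 2008 and 2016 endpoints match the abstract; intermediate counts are hypothetical.
import numpy as np
from scipy.stats import linregress

years = np.arange(2008, 2017)
counts = np.array([15, 22, 28, 35, 44, 52, 60, 70, 81])  # hypothetical except endpoints

result = linregress(years, counts)
print(f"slope = {result.slope:.1f} SRs/year, "
      f"R^2 = {result.rvalue**2:.2f}, p = {result.pvalue:.2e}")
```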
Procedia PDF Downloads 155
1196 Intrinsic Contradictions in Entrepreneurship Development and Self-Development
Authors: Revaz Gvelesiani
Abstract:
The problem of compliance between the state economic policy and entrepreneurial policy of businesses is primarily manifested in the contradictions related to the congruence between entrepreneurship development and self-development strategies. Among various types (financial, monetary, social, etc.) of the state economic policy aiming at the development of entrepreneurship, economic order policy is of special importance. Its goal is to set the framework for both public and private economic activities and achieve coherence between the societal value system and the formation of the economic order framework. Economic order policy, in its turn, involves intrinsic contradiction between the social and the competitive order. Competitive order is oriented on the principle of success, while social order _ on the criteria of need satisfaction, which contradicts, at least partly, to the principles of success. Thus within the economic order policy, on the one hand, the state makes efforts to form social order and expand its frontiers, while, on the other hand, market is determined to establish functioning competitive order and ensure its realization. Locating the adequate spaces for and setting the rational border between the state (social order) and the private (competitive order) activities, represents the phenomenon of the decisive importance from the entrepreneurship development strategy standpoint. In the countries where the above mentioned spaces and borders are “set” correctly, entrepreneurship agents (small, medium-sized and large businesses) achieve great success by means of seizing the respective segments and maintaining the leading positions in the internal, the European and the world markets for a long time. As for the entrepreneurship self-development strategy, above all, it involves: •market identification; •interactions with consumers; •continuous innovations; •competition strategy; •relationships with partners; •new management philosophy, etc. The analysis of compliance between the entrepreneurship strategy and entrepreneurship culture should be the reference point for any kind of internationalization in order to avoid shocks of cultural nature and the economic backwardness. Stabilization can be achieved only when the employee actions reflect the existing culture and the new contents of culture (targeted culture) is turned into the implicit consciousness of the personnel. The future leaders should learn how to manage different cultures. Entrepreneurship can be managed successfully if its strategy and culture are coherent. However, not rarely enterprises (organizations) show various forms of violation of both personal and team actions. If personal and team non-observances appear as the form of influence upon the culture, it will lead to global destruction of the system and structure. This is the entrepreneurship culture pathology that complicates to achieve compliance between the entrepreneurship strategy and entrepreneurship culture. 
Thus, the intrinsic contradictions of entrepreneurship development and self-development strategies complicate the task of reaching compliance between state economic policy and company entrepreneurship policy: on the one hand, there is a contradiction between the social and the competitive order within economic order policy, and on the other hand, a contradiction exists between entrepreneurship strategy and entrepreneurship culture within entrepreneurship policy. Keywords: economic order policy, entrepreneurship, development contradictions, self-development contradictions
Procedia PDF Downloads 328
1195 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data
Authors: Sara Bonetti
Abstract:
The use of quantitative data to support policies and justify investments has become imperative in many fields including the field of education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy that use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed at analyzing these experiences in accessing, using, and producing quantitative data. This study utilized semi-structured interviews to capture the differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department to allow for a broader perspective at systems level. The study followed Núñez’s multilevel model of intersectionality. The key in Núñez’s model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of different influences at individual, organizational, and system levels that might intersect and affect ECE professionals’ experiences with quantitative data. At the individual level, an important element identified was the participants’ educational background, as it was possible to observe a relationship between that and their positionality, both with respect to working with data and also with respect to their power within an organization and at the policy table. For example, those with a background in child development were aware of how their formal education failed to train them in the skills that are necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants’ position within the organization and their organization’s position with respect to others and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as shaping what data is collected and available. These differences reflected on the interviewees’ perceptions and expectations for the ECE workforce. For example, one of the interviewees pointed out that many ECE professionals happen to use data out of the necessity of the moment. This lack of intentionality is a cause for, and at the same time translates into missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking the inadequacy of ECE students’ training in working with data. In conclusion, Núñez’s model helped understand the different elements that affect ECE professionals’ experiences with quantitative data. In particular, what was clear is that these professionals are not being provided with the necessary support and that we are not being intentional in creating data literacy skills for them, despite what is asked of them and their work.Keywords: data literacy, early childhood professionals, intersectionality, quantitative data
Procedia PDF Downloads 253
1194 Mathematical Modeling of Nonlinear Process of Assimilation
Authors: Temur Chilachava
Abstract:
This work proposes a new nonlinear mathematical model describing the assimilation of a population speaking a less widespread language by two states with two different widespread languages, taking the demographic factor into account. The model considers three subjects: the population and government institutions of the first widespread language, which use state and administrative resources to assimilate the third population with the less widespread language; the population and government institutions of the second widespread language, which act on the same third population in the same way; and the third population itself (for example, a small state formation or an autonomy) exposed to bilateral assimilation by two rather powerful states. We showed earlier that when the demographic factor of all three subjects is zero, the population with the less widespread language is completely assimilated by the two states, and the result of assimilation (the redistribution of the assimilated population) depends on the initial sizes and the technological and economic capabilities of the assimilating states. The model considered here incorporates the demographic factor by assuming a natural decrease in the populations of the assimilating states and a natural increase in the population undergoing bilateral assimilation. For certain ratios between the coefficients of natural population change of the assimilating states and the assimilation coefficients, two first integrals are obtained for the nonlinear system of three differential equations. Cases of two powerful states assimilating the population of a small state formation (autonomy) are considered, with different population sizes and with both identical and different economic and technological capabilities. In the first case, the problem effectively reduces to a nonlinear system of two differential equations describing the classical predator-prey model, in which the assimilated population naturally plays the role of the prey and the population of one of the assimilating states plays the role of the predator. In this case the population of the second assimilating state changes in proportion to the population of the first, with the coefficient of proportionality equal to the ratio of the assimilators' populations at the initial time point. In the second case, the problem again reduces to a nonlinear system of two differential equations of predator-prey type, with closed integral curves in the phase plane. In both cases, the population with the less widespread language is not fully assimilated. Intervals of variation of the population sizes of all three subjects of the model are found. These mathematical models, which can approximate real situations involving actual assimilating countries and state formations (autonomies or entities with unrecognized status) subject to bilateral assimilation, show that the only way for such populations to avoid assimilation is a natural demographic increase of their own population combined with a natural decrease in the populations of the assimilating states. Keywords: nonlinear mathematical model, bilateral assimilation, demographic factor, first integrals, result of assimilation, intervals of change of number of the population
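The abstract does not state the model equations explicitly, so the following is only a hedged numerical sketch of a generic three-population system of the predator-prey type it describes (natural decrease of the assimilating populations, natural increase of the assimilated population, and bilateral assimilation coupling); the functional form and all parameter values are assumptions made purely for illustration.

```python
# Hedged sketch: numerical integration of a generic three-population system of the
# predator-prey type described in the abstract. The coupling terms and all parameter
# values are assumptions for illustration, not the paper's actual equations.
import numpy as np
from scipy.integrate import solve_ivp

def assimilation_rhs(t, y, a1, a2, b1, b2, g):
    x1, x2, z = y                           # assimilating populations x1, x2; assimilated population z
    dx1 = -a1 * x1 + b1 * x1 * z            # natural decrease plus gain from assimilation
    dx2 = -a2 * x2 + b2 * x2 * z
    dz = g * z - (b1 * x1 + b2 * x2) * z    # natural increase minus bilateral assimilation
    return [dx1, dx2, dz]

params = (0.01, 0.012, 0.0004, 0.0005, 0.02)   # assumed coefficient values
sol = solve_ivp(assimilation_rhs, (0, 200), [50.0, 40.0, 5.0], args=params)

print("Final populations (x1, x2, z):", np.round(sol.y[:, -1], 2))
```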
Procedia PDF Downloads 470