Search results for: Computing Accreditation Committee
1063 An Investigation of the University Council’s Image: A Case of Suan Sunandha Rajabhat University
Authors: Phitsanu Phunphetchphan
Abstract:
The purpose of this research was to investigate the opinions of Rajabhat University staff towards the performance of the university council committee, focusing on (1) personal characteristics of the committee members; (2) duties designated by the university council; and (3) the relationship between the university council and staff. The population of this study included all high-level management of Suan Sunandha Rajabhat University, a total of 200 respondents. Data analysis included frequency, percentage, mean, and standard deviation. The findings revealed that the majority of staff rated the performance of the university council at a high level. 'Overall appropriate qualification of the university council' received the highest score, while 'good governance' received the lowest mean score. Moreover, the findings also revealed that the relationship between the university council's members and the staff was rated at a high level, while 'the integrity of policy implementation' received the lowest score.
Keywords: investigation, performance, university council, management
Procedia PDF Downloads 249
1062 Cloud Resources Utilization and Science Teacher’s Effectiveness in Secondary Schools in Cross River State, Nigeria
Authors: Michael Udey Udam
Abstract:
Background: This study investigated the impact of cloud resources, a component of cloud computing, on science teachers’ effectiveness in secondary schools in Cross River State. Three (3) research questions and three (3) alternative hypotheses guided the study. Method: The descriptive survey design was adopted for the study. The population of the study comprised 1209 science teachers in public secondary schools of Cross River State. Sample: A sample of 487 teachers was drawn from the population using a stratified random sampling technique. A researcher-made structured questionnaire with 18 items was used for data collection. Research question one was answered using the Pearson Product Moment Correlation, while research question two and the hypotheses were answered using the Analysis of Variance (ANOVA) statistics in the Statistical Package for Social Sciences (SPSS) at a 0.05 level of significance. Results: The results of the study revealed that there is a positive correlation between the utilization of cloud resources in teaching and teaching effectiveness among science teachers in secondary schools in Cross River State; there is a negative correlation between gender and utilization of cloud resources among science teachers in secondary schools in Cross River State; and there is a significant correlation between teaching experience and the utilization of cloud resources among science teachers in secondary schools in Cross River State. Conclusion: The study justifies the effectiveness of the Cross River State government policy of introducing cloud computing into the education sector. The study recommends that the policy should be sustained.
Keywords: cloud resources, science teachers, effectiveness, secondary school
Procedia PDF Downloads 74
1061 The Effects of Different Doses of Caffeine on Young Futsal Players
Authors: Saead Rostami, Seyyed Hadi Hosseini Alavije, Aliakbar Torabi, Mohammad Bekhradi
Abstract:
This study examines the effects of different doses of caffeine on young futsal players. Young futsal players of the selected ShahinShahr (a city in Esfahan province, Iran) team were sampled (24 players, aged 18.3±1.9 years). All players are members of a youth team playing in the Esfahan counties league. Having at least 5 years of experience, 2 practices and 1 match per week, and lacking any limitation in the past 6 months were the most important requirements for sampling the players. Next, the study topic, its method, its uses, as well as possible risks were explained to the players, who signed a consent letter to take part in the study. Interest in the use of caffeine as an ergogenic aid has increased since the International Olympic Committee lifted the partial ban on its use. Caffeine has beneficial effects on various aspects of athletic performance, but its effects on training have been neglected. The purpose of this study was to investigate the acute effect of caffeine on testosterone and cortisol in young futsal players.
Keywords: anabolic, catabolic, performance, testosterone cortisol ratio, RAST test
Procedia PDF Downloads 347
1060 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements
Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo
Abstract:
Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. 
Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. We shall then illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation vs. those obtained with a traditional finite difference approach. Numerical results shall show that our proposed adjoint-based technique produces enhanced accuracy solutions while its cost is negligible, as opposed to the finite difference approach, which requires the solution of one additional problem per derivative.
Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation
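The adjoint-state idea behind this abstract can be illustrated with a generic toy model (our own construction for illustration, not the authors' resistivity formulation): for a discretized system A(p)u = b with a scalar measurement m = c·u, one extra solve of the adjoint system A^T lam = c yields dm/dp = -lam·(dA/dp)u, which a finite-difference check should reproduce.

```python
import numpy as np

# Toy adjoint-state derivative check (illustrative; matrices are random,
# with a single scalar parameter p entering A linearly for simplicity).

def measurement(p):
    A = A0 + p * dA                  # parameterized system matrix
    u = np.linalg.solve(A, b)        # forward solve
    return c @ u                     # scalar measurement m = c.u

rng = np.random.default_rng(0)
n = 6
A0 = rng.normal(size=(n, n)) + n * np.eye(n)   # keep it well conditioned
dA = rng.normal(size=(n, n))                   # dA/dp for this linear case
b = rng.normal(size=n)
c = rng.normal(size=n)

p = 0.3
A = A0 + p * dA
u = np.linalg.solve(A, b)
lam = np.linalg.solve(A.T, c)        # single adjoint solve
grad_adjoint = -lam @ (dA @ u)       # dm/dp from the adjoint identity

h = 1e-6                             # central finite difference for comparison
grad_fd = (measurement(p + h) - measurement(p - h)) / (2 * h)
print(abs(grad_adjoint - grad_fd))
```

The cost advantage the abstract mentions is visible in the structure: the adjoint route needs one extra linear solve regardless of the number of parameters, whereas finite differences require additional forward solves per derivative.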
Procedia PDF Downloads 178
1059 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads
Authors: Gaurav Kumar Sinha
Abstract:
In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments.
It acknowledges that big data processing requires distributed and parallel computing capabilities that span cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies
Procedia PDF Downloads 67
1058 Yawning Computing Using Bayesian Networks
Authors: Serge Tshibangu, Turgay Celik, Zenzo Ncube
Abstract:
Road crashes kill over a million people every year and leave millions more injured or permanently disabled. Various annual reports reveal that the percentage of fatal crashes due to fatigue or the driver falling asleep comes directly after the percentage of fatal crashes due to intoxicated drivers. This percentage is higher than the combined percentage of fatal crashes due to illegal/unsafe U-turns and illegal/unsafe reversing. Although a relatively small percentage of police reports on road accidents highlights drowsiness and fatigue, the importance of these factors is greater than we might think, hidden by the undercounting of their events. Some scenarios show that these factors are significant in accidents with killed and injured people. Hence the need for an automatic driver fatigue detection system in order to considerably reduce the number of accidents owing to fatigue. This research approaches the driver fatigue detection problem in an innovative way by combining cues collected from both temporal analysis of drivers’ faces and the environment. Monotony in the driving environment is inter-related with visual symptoms of fatigue on drivers’ faces to achieve fatigue detection. Optical and infrared (IR) sensors are used to analyse the monotony in the driving environment and to detect the visual symptoms of fatigue on the human face. Internal cues from drivers’ faces and external cues from the environment are combined using machine learning algorithms to automatically detect fatigue.
Keywords: intelligent transportation systems, Bayesian networks, yawning computing, machine learning algorithms
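The cue-fusion idea can be sketched as a two-evidence Bayesian update: a facial cue (yawning, from the IR sensor) and an environmental cue (monotony, from the optical sensor) jointly update a prior belief in driver fatigue. All probabilities below are made-up placeholders, not values from the study, and the two cues are assumed conditionally independent given fatigue.

```python
# Toy Bayesian fusion of an internal cue (yawn) and an external cue
# (monotonous environment) into a posterior fatigue probability.

p_fatigue = 0.1                                # prior P(fatigue)
p_yawn = {True: 0.7, False: 0.1}               # P(yawn observed | fatigue)
p_mono = {True: 0.6, False: 0.3}               # P(monotony observed | fatigue)

def posterior_fatigue(yawn_seen, mono_seen):
    """Bayes' rule with two conditionally independent observations."""
    def lik(fatigued):
        ly = p_yawn[fatigued] if yawn_seen else 1 - p_yawn[fatigued]
        lm = p_mono[fatigued] if mono_seen else 1 - p_mono[fatigued]
        return ly * lm
    num = lik(True) * p_fatigue
    return num / (num + lik(False) * (1 - p_fatigue))

# Both cues present: belief in fatigue rises well above the prior
print(round(posterior_fatigue(True, True), 3))   # prints 0.609
```

A full Bayesian network would add more nodes (eye closure, time of day, road type), but the update mechanics are the same.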
Procedia PDF Downloads 455
1057 FRATSAN: A New Software for Fractal Analysis of Signals
Authors: Hamidreza Namazi
Abstract:
Fractal analysis assesses the fractal characteristics of data. It consists of several methods to assign fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion, networks, etc. Fractal analysis is now widely used in all areas of science. An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; rather, other essential characteristics have to be considered. For this purpose, a Visual C++ based software called FRATSAN (FRActal Time Series ANalyser) was developed, which extracts information from signals through three measures: fractal dimension, Jeffrey’s measure, and the Hurst exponent. After computing these measures, the software plots the graphs for each measure. Besides computing the three measures, the software can classify whether or not the signal is fractal. In fact, the software uses a dynamic method of analysis for all the measures. A sliding window is selected with a length equal to 10% of the total number of data entries. This sliding window is moved one data entry at a time to obtain all the measures. This makes the computation very sensitive to slight changes in the data, thereby giving the user an acute analysis of the data. In order to test the performance of this software, a set of EEG signals was given as input and the results were computed and plotted. This software is useful not only for fundamental fractal analysis of signals but can be used for other purposes. For instance, by analyzing the Hurst exponent plot of a given EEG signal in patients with epilepsy, the onset of seizure can be predicted by noticing the sudden changes in the plot.
Keywords: EEG signals, fractal analysis, fractal dimension, Hurst exponent, Jeffrey’s measure
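The sliding-window scheme described above (a window of 10% of the data, advanced one sample at a time) can be sketched as follows. The rescaled-range (R/S) estimator used here is our own choice for illustration; FRATSAN's actual estimators may differ.

```python
import numpy as np

# Sketch of a sliding-window Hurst exponent track, as the abstract describes.

def rs_hurst(x):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent."""
    x = np.asarray(x, dtype=float)
    sizes = [len(x) // k for k in (1, 2, 4)]   # a few subseries lengths
    log_n, log_rs = [], []
    for n in sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation
            r = dev.max() - dev.min()           # range
            s = seg.std()                       # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]      # slope ~ Hurst exponent

def sliding_hurst(signal, frac=0.10):
    w = max(8, int(len(signal) * frac))         # window = 10% of the data
    return [rs_hurst(signal[i:i + w]) for i in range(len(signal) - w + 1)]

rng = np.random.default_rng(1)
noise = rng.normal(size=500)                    # white noise, H near 0.5
h_track = sliding_hurst(noise)
print(round(float(np.mean(h_track)), 2))
```

Plotting `h_track` over time gives exactly the kind of Hurst-exponent plot the abstract proposes for spotting sudden changes, e.g., around seizure onset in EEG.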
Procedia PDF Downloads 467
1056 Evaluation of Environmental and Social Management System of Green Climate Fund's Accredited Entities: A Qualitative Approach Applied to Environmental and Social System
Authors: Sima Majnooni
Abstract:
This paper discusses the environmental and social management framework of the Green Climate Fund (GCF). The environmental and social management framework ensures that the accredited entity considers the GCF's accreditation standards and effectively implements each of the GCF-funded projects. The GCF requires all accredited entities to meet basic transparency and accountability standards as well as environmental and social safeguards (ESMS). In doing so, the accredited entity sets up different independent units. One of these units is called the grievance mechanism. When allegations of environmental and social harms are raised in association with GCF-funded activities, affected parties can contact the entity’s grievance unit. One of the most challenging things about the accredited entities' grievance units is the lack of available information and resources on the entities' websites. Many AEs have an anti-corruption or anti-money-laundering unit, but they do not have an environmental and social unit for affected people. This paper will evaluate the effectiveness of the environmental and social grievance mechanisms of AEs by using a qualitative approach to indicate how many of the AEs have a poor or an effective GRM. Some ESMSs seem highly effective. On the other hand, other mechanisms lack basic requirements such as a clear, transparent, uniform procedure and a definitive timetable. We have looked at each AE's mechanism not only in light of how the website details the process of the grievance mechanism but also in light of their risk category. Many mechanisms appear inadequate for lower-risk-category entities (C) and, surprisingly, even for many higher-risk categories (A). We found that, in most cases, the grievance mechanism of AEs seems vague.
Keywords: grievance mechanism, vague environmental and social policies, green climate fund, international climate finance, lower and higher risk category
Procedia PDF Downloads 124
1055 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights
Authors: Julian Wise
Abstract:
Newcrest Mining is one of the world’s top five gold and rare earth mining organizations by production, reserves, and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 listed organization Insight Enterprises, to standardize machine learning solutions which process data from over one hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the utilization of cloud software architecture and edge computing, these technological developments enable standardized machine learning applications to influence the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings on mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired and utilized for predictive modelling is processed through edge computing and collectively stored within a data lake. Being involved in the digital transformation has necessitated standardizing the software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvements to the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of improved worker safety and production efficiency through big data applications.
Keywords: mineral technology, big data, machine learning operations, data lake
Procedia PDF Downloads 112
1054 Revised Risk Priority Number in Failure Mode and Effects Analysis Model from the Perspective of Healthcare System
Authors: Fatemeh Rezaei, Mohammad H. Yarmohammadian, Masoud Ferdosi, Abbas Haghshnas
Abstract:
Background: Failure Mode and Effects Analysis (FMEA) is now known as one of the main methods of risk assessment and is among the accreditation requirements for many organizations. The Risk Priority Number (RPN) approach is generally preferred, especially for its ease of use. It does not require statistical data but is based on subjective evaluations given by experts about the Occurrence (Oi), the Severity (Si), and the Detectability (Di) of each cause of failure. Methods: This study is a quantitative-qualitative research. For the qualitative dimension, the focus group method with an inductive approach was used. To evaluate the results of the qualitative study, a quantitative assessment was conducted to calculate the RPN score. Results: We studied the patient journey process in the surgery ward, and the most important phase of the process was determined to be the transport of the patient from the holding area to the operating room. Failures of this phase with the highest priority were determined by defining inclusion criteria covering severity (clinical effect, claim consequence, waste of time, and financial loss), occurrence (time-unit occurrence and degree of exposure to risk), and preventability (degree of preventability and defensive barriers), and by quantifying risk priority criteria in the context of the RPN index. Reassessment of the improved RPN through root cause analysis (RCA) showed some variations. Conclusions: It can be concluded that understandable criteria should be developed according to personnel's specialized language and field of communication. Therefore, the participation of both technical and clinical groups is necessary to modify and apply these models.
Keywords: failure mode, effects analysis, risk priority number (RPN), health system, risk assessment
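The RPN index referred to above is the product of the three expert scores. The sketch below assumes the common 1-10 scale for each factor; the failure modes and scores are hypothetical illustrations, not the study's data.

```python
# Minimal RPN calculation: RPN = Occurrence (Oi) x Severity (Si) x
# Detectability (Di), each scored subjectively by experts on a 1-10 scale.

def rpn(occurrence, severity, detectability):
    for score in (occurrence, severity, detectability):
        if not 1 <= score <= 10:
            raise ValueError("scores are expected on a 1-10 scale")
    return occurrence * severity * detectability

# Hypothetical failure modes for the patient-transport phase described above
failure_modes = {
    "wrong patient transported": rpn(2, 9, 4),   # rare, severe, hard to catch
    "delayed transport": rpn(6, 4, 2),           # frequent but easily detected
    "missing chart on arrival": rpn(5, 3, 6),
}

# Rank failure modes for corrective action, highest RPN first
for mode, score in sorted(failure_modes.items(), key=lambda kv: -kv[1]):
    print(mode, score)
```

Ranking by RPN is what makes the subjective scores actionable: the team addresses the highest-RPN failure modes first, then re-scores after corrective action, which is the reassessment step the abstract describes.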
Procedia PDF Downloads 313
1053 Supervisory Board in the Governance of Cooperatives: Disclosing Power Elements in the Selection of Directors
Authors: Kari Huhtala, Iiro Jussila
Abstract:
The supervisory board is assumed to use power in the governance of a firm, but the actual use of power has been scantily investigated. The research question of the paper is: "How does the supervisory board use power in the selection of the board of directors?" The data stem from 11 large Finnish agricultural cooperatives. The research approach was qualitative, including semi-structured interviews with board of directors and supervisory board chairpersons. The results were analyzed and interpreted against theories of social power. As a result, the use of power is approached from two perspectives: (1) formal position-based authority and (2) informal power. Central elements of power were the mandate of the supervisory board, the role of the supervisory board, the supervisory board chair, the nomination committee, collaboration between the supervisory board and the board of directors, the role of regions, and the role of the board of directors. The study contributes to the academic discussion on corporate governance in cooperatives and on the supervisory board in the context of the two-tier model. Additional research on the model in other countries and on other types of cooperatives would further academic understanding of supervisory boards.
Keywords: board, co-operative, supervisory board, selection, director
Procedia PDF Downloads 173
1052 Corporate Governance and Corporate Sustainability: Evidence from a Developing Country
Authors: Edmund Gyimah
Abstract:
Using data from 146 annual reports of listed firms in Ghana for the period 2013-2020, this study presents indicative findings which inspire practical actions and future research. Firms which prepared and presented sustainability reports were excluded from this study, for a coverage of corporate sustainability disclosures centred on annual reports. Also, corporate sustainability disclosures of the firms on corporate websites were not included in the study, considering the tendency of updates which cannot easily be traced. The corporate sustainability disclosures in the annual reports since the commencement of the G4 Guidelines in 2013 have been below average for all the dimensions of sustainability and for general sustainability disclosures. A few traditional elements of board composition, such as board size and board independence, could affect the corporate sustainability disclosures in the annual reports, as well as the age of the firm, firm size, and industry classification of the firm. Sustainability disclosures are greater in sustainability reports than in annual reports; however, firms without sustainability reports should have a considerable amount of sustainability disclosures in their annual reports. Also, because of the essence of sustainability, this study suggests that firms establish a sustainability committee; perhaps they could make a difference by disclosing enough sustainability information even when they do not present sustainability information in stand-alone reports.
Keywords: disclosures, sustainability, board, reports
Procedia PDF Downloads 188
1051 Modernization of the Economic Price Adjustment Software
Authors: Roger L. Goodwin
Abstract:
The US Consumer Price Indices (CPIs) measure hundreds of items in the US economy. Many social programs and government benefits are indexed to the CPIs. In the mid-to-late 1990s, much research went into changes to the CPI by a Congressional Advisory Committee. One conclusion that can be drawn from the research is that, aside from the existence of alternative estimators for the CPI, any fundamental change to the CPI will affect many government programs. The purpose of this project is to modernize an existing process. This paper will show the development of a small, visual software product that documents the Economic Price Adjustment (EPA) for long-term contracts. The existing workbook does not provide the flexibility to calculate EPAs where the base month and the option month are different, nor does the workbook provide automated error checking. The small, visual software product provides the additional flexibility and error checking. This paper presents the feedback on the project.
Keywords: Consumer Price Index, Economic Price Adjustment, contracts, visualization tools, database, reports, forms, event procedures
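The base-month/option-month flexibility described above can be sketched with the standard index-ratio form of an economic price adjustment: the contract price is scaled by the ratio of the CPI in the option month to the CPI in the base month. The formula is the conventional EPA shape, not taken from the paper, and the index values below are illustrative, not actual BLS data.

```python
# Sketch of an EPA calculation where base month and option month differ,
# with the basic error checking the modernized tool adds.

cpi = {"2023-01": 299.2, "2023-07": 305.7, "2024-01": 308.4}  # illustrative

def adjusted_price(base_price, base_month, option_month, index=cpi):
    for m in (base_month, option_month):
        if m not in index:                       # automated error checking
            raise KeyError(f"no CPI value for {m}")
    # Scale the price by index growth between the two months
    return base_price * index[option_month] / index[base_month]

# Base month and option month may now differ, unlike the old workbook
print(round(adjusted_price(100_000.0, "2023-01", "2024-01"), 2))
```

A real implementation would also record which CPI series and revision was used, since the paper's emphasis is on documenting the EPA for long-term contracts.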
Procedia PDF Downloads 317
1050 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when their algorithm performed better than the best known classical algorithm for Max-Cut. Whilst classical algorithms have improved since then and have returned to being faster and more efficient, this was a huge milestone for quantum computing, and that work is often used as a benchmarking tool and a foundation to explore variants of QAOAs. This, alongside other famous algorithms like Grover’s or Shor’s, highlights to the world the potential that quantum computing holds. It also presents the reality of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problem that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization problem.
The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA with a COBYLA optimizer, which is a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or quality of the solution, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
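The hybrid scheme can be sketched end-to-end on a toy instance. The sketch below is our own construction, not the paper's implementation: a plain-numpy statevector simulation of depth-1 QAOA on a triangle graph (max cut = 2), with the gradient-based optimizer replaced by a mutate-and-select evolutionary loop over the angles (gamma, beta). Population size, mutation scale, and generation count are arbitrary guesses.

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]   # triangle graph, max cut = 2
n = 3
dim = 2 ** n

# Diagonal of the cut operator: C(z) = number of edges cut by bitstring z
bits = ((np.arange(dim)[:, None] >> np.arange(n)) & 1)
cut = sum((bits[:, i] != bits[:, j]).astype(float) for i, j in edges)

def expectation(gamma, beta):
    """<C> for depth-1 QAOA with angles (gamma, beta), simulated exactly."""
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)   # |+>^n
    state = np.exp(-1j * gamma * cut) * state               # cost layer
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])     # e^{-i beta X}
    psi = state.reshape((2,) * n)
    for q in range(n):                                      # mixer layer
        psi = np.moveaxis(np.tensordot(rx, np.moveaxis(psi, q, 0), axes=1), 0, q)
    psi = psi.reshape(dim)
    return float(np.real(np.conj(psi) @ (cut * psi)))

# Gradient-free evolutionary loop over the angle parameters
rng = np.random.default_rng(7)
pop = rng.uniform(0, np.pi, size=(20, 2))                   # (gamma, beta) pairs
for _ in range(40):
    fitness = np.array([expectation(g, b) for g, b in pop])
    parents = pop[np.argsort(fitness)[-5:]]                 # keep the best 5
    pop = np.repeat(parents, 4, axis=0) + rng.normal(0, 0.1, size=(20, 2))
best = max(expectation(g, b) for g, b in pop)
print(round(best, 2))
```

Because each fitness evaluation is independent, the loop over the population is exactly the part the paper proposes to parallelize; the selection step also carries good solutions forward across generations, which is the noise-robustness argument made above.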
Procedia PDF Downloads 60
1049 The Complexities of Designing a Learning Programme in Higher Education with the End-User in Mind
Authors: Andre Bechuke
Abstract:
The quality of every learning programme in Higher Education (HE) is dependent on the planning, design, and development of curriculum decisions. These curriculum development decisions are highly influenced by knowledge of the end-user, who is not always just the student. When curriculum experts plan, design, and develop learning programmes, they always have the end-users in mind throughout the process. Without proper knowledge of the end-user(s), the design and development of a learning programme might be flawed. Curriculum experts often struggle to determine who the real end-user is. As such, it is even more challenging to establish what needs to be known about the end-user that should inform the planning, design, and development of a learning programme. This research sought to suggest approaches to guide curriculum experts in identifying the end-user(s), taking into consideration the pressure and influence other agencies, structures, and stakeholders (industry, students, government, university contexts, lecturers, international communities, professional regulatory bodies) have on the design of a learning programme and the graduates of the programme. Considering the influence of these stakeholders, which is also very important, the task of deciding who the real end-user of the learning programme is becomes very challenging. This study makes use of criteria 1 and 18 of the Council on Higher Education criteria for programme accreditation to guide the process of identifying the end-users when developing a learning programme. Criterion 1 suggests that designers must ensure that the programme is consonant with the institution’s mission, forms part of institutional planning and resource allocation, meets national requirements and the needs of students and other stakeholders, and is intellectually credible.
According to criterion 18, in designing a learning programme, steps must be taken to enhance the employability of students and alleviate shortages of expertise in relevant fields. In conclusion, there is hardly ever one group of end-users to be considered in developing a learning programme, and the notion that students are the end-users is not true, especially when graduates are unable to use the qualification for employment.
Keywords: council on higher education, curriculum design and development, higher education, learning programme
Procedia PDF Downloads 81
1048 A Study on the Functional Safety Analysis of Stage Control System Based on International Electrotechnical Commission (IEC) 61508-2
Authors: Youn-Sung Kim, Hye-Mi Kim, Sang-Hoon Seo, Jaden Cha
Abstract:
The international standard IEC 61508 sets out a generic approach to all safety lifecycle activities for systems comprised of electrical/electronic/programmable electronic (E/E/PE) elements that are used to perform safety functions. The control unit in a stage control system is a safety-related facility that controls the state and speed of the running stage system, and it performs a safety-critical function within the stage control system. The control unit is part of the safety loop corresponding to IEC 61508 and is classified as the logic part of the safety loop. In this paper, we use FMEDA (Failure Mode Effect and Diagnostic Analysis) to verify the fault tolerance methods and functional safety of the control unit. Moreover, we determined the SIL (Safety Integrity Level) of the control unit according to the safety requirements defined in IEC 61508-2, based on the analyzed functional safety.
Keywords: safety function, failure mode effect, IEC 61508-2, diagnostic analysis, stage control system
Procedia PDF Downloads 2781047 Impact Analysis of Quality Control Practices in Veterinary Diagnostic Labs in Lahore, Pakistan
Authors: Faiza Marrium, Masood Rabbani, Ali Ahmad Sheikh, Muhammad Yasin Tipu Javed Muhammad, Sohail Raza
Abstract:
More than 75% of the diseases that have spread in the human population globally over the past 10 years are linked to the veterinary sector. Veterinary diagnostic labs are a powerful ally for the diagnosis, prevention, and monitoring of animal diseases in any country. To avoid the detrimental effects of errors in disease diagnosis and biorisk management, there is a dire need to establish a quality control system. In the current study, 3 private and 6 public-sector veterinary diagnostic labs were selected for the survey. A questionnaire based on the biorisk management guidelines of CWA 15793 was designed to find quality control breaches in lab design, personnel, equipment and consumables, quality control measures adopted in the lab, waste management, environmental monitoring, and customer care. The data were analyzed by frequency distribution using SPSS version 18.0. A non-significant difference was found across all parameters of lab design, personnel, equipment and consumables, quality control measures adopted in the lab, waste management, environmental monitoring, and customer care, with average percentages of 46.6, 57.77, 52.7, 55.5, 54.44, 48.88, and 60, respectively. A non-significant difference among all nine labs was also found; ranked by average compliance percentage across all parameters, the labs were Lab 2 (78.13), Lab 3 (70.56), Lab 5 (57.51), Lab 6 (56.37), Lab 4 (55.02), Lab 9 (49.58), Lab 7 (47.76), Lab 1 (41.01), and Lab 8 (36.09). This study shows that veterinary diagnostic labs in Lahore district are not giving proper attention to the quality of their systems and that there is no significant difference between the setups of private and public-sector laboratories. Most parameters fall between 50 and 80 percent compliance, which requires work and improvement per WHO criteria. Keywords: veterinary lab, quality management system, accreditation, regulatory body, disease identification
Procedia PDF Downloads 1461046 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack
Authors: Vincent Andrew Cappellano
Abstract:
In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system’s intended operations. However, architected systems may meet those availability requirements only during normal operations, not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage to communications or physical nodes). This increases the risk of poor selection of a candidate architecture, owing to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations, to aid in selecting appropriately resilient architectures. To accomplish this, a set of simulation and evaluation efforts is undertaken that processes, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, such that as they are degraded, the resulting functional availability of the system can be determined. 
A reinforcement learning agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, a structured method emerges for evaluating the performance of candidate architectures against one another, yielding a metric that rates each architecture's resilience to these attack types and strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and the resulting architectural recommendation, against the baseline requirements and against existing multi-factor computing architecture selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks. Keywords: architecture, resiliency, availability, cyber-attack
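The core of the proposed metric, functional availability averaged over a spectrum of degradations, can be sketched classically. The toy topology, function placement, and single-node failure spectrum below are our own illustrative assumptions, not the paper's simulation setup:

```python
from collections import deque

# Hypothetical topology: node -> set of neighbour nodes
LINKS = {
    "gw": {"a", "b"},
    "a": {"gw", "b", "c"},
    "b": {"gw", "a", "d"},
    "c": {"a"},
    "d": {"b"},
}
# Hypothetical mapping of system functions to the nodes hosting them
FUNCTIONS = {"telemetry": {"c"}, "control": {"a", "b"}}

def reachable(alive, start="gw"):
    """Nodes reachable from the gateway over surviving links (BFS)."""
    if start not in alive:
        return set()
    seen, frontier = {start}, deque([start])
    while frontier:
        n = frontier.popleft()
        for m in LINKS[n]:
            if m in alive and m not in seen:
                seen.add(m)
                frontier.append(m)
    return seen

def functional_availability(alive):
    """Fraction of functions with at least one live, reachable host."""
    up = reachable(alive)
    ok = sum(1 for hosts in FUNCTIONS.values() if hosts & up)
    return ok / len(FUNCTIONS)

# Average availability over every single-node failure: a crude
# stand-in for the paper's full spectrum-of-degradations metric.
nodes = set(LINKS)
scores = [functional_availability(nodes - {n}) for n in nodes if n != "gw"]
resilience = sum(scores) / len(scores)
```

Here losing node "a" cuts off the only telemetry host "c", so that scenario scores 0.5; averaging over all single-node failures gives a resilience of 0.75, which a candidate architecture with a second telemetry host would improve.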
Procedia PDF Downloads 1081045 Dengue Death Review: A Tool to Adjudge the Cause of Dengue Mortality and Use of the Tool for Prevention of Dengue Deaths
Authors: Gagandeep Singh Grover, Vini Mahajan, Bhagmal, Priti Thaware, Jaspreet Takkar
Abstract:
Dengue is a mosquito-borne viral disease endemic in many countries in the tropics and sub-tropics. The state of Punjab in India shows cyclical and seasonal variation in dengue cases, and the Case Fatality Rate of dengue has ranged from 0.6 to 1.0 in past years. The department has initiated a review of dengue deaths in order to establish the exact cause of death in each case. The study was undertaken to identify the associated co-morbidities and factors causing death in dengue cases. It used a predesigned proforma on which the medical and laboratory records were entered and then reviewed by an expert committee of doctors. The study revealed that dengue cases with co-morbidities have longer hospital stays. Fluid overload and co-morbidities were found to be major factors leading to death; in confirmed cases of dengue, however, hepatorenal shutdown was found to be a major cause of mortality. The data obtained will help sensitize treating physicians in order to decrease dengue mortality in the future. Keywords: dengue, death, morbidities, DHF, DSS
Procedia PDF Downloads 3111044 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm
Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy
Abstract:
IoT networks today solve various consumer problems, from home automation systems to aiding autonomous vehicles, through the deployment of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and to interact with each other to help the vehicle reach its destination safely and on time. IoT systems are predominantly dependent on the cloud environment for data storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required by certain applications. Data management in fog nodes is strenuous because fog networks are dynamic in their availability and hardware capability. It becomes more challenging when the nodes in the network are also short-lived, detaching and joining frequently. When an end-user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access and manage the data stored in the fog devices, as a conventional static way of managing the data does not work in fog networks. The proposed solution is a protocol that defines sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the fog node through the application of reinforcement learning, so that access to the data is determined dynamically based on the requests. Keywords: IoT, fog networks, data stewardship, dynamic access policy
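A minimal sketch of the sensitivity-level idea, assuming a four-level ladder and a replication rule that shrinks with sensitivity. The level names, clearance model, and replica formula are all our own illustrative choices; the paper's protocol additionally learns its access policy with reinforcement learning.

```python
from dataclasses import dataclass

# Hypothetical sensitivity ladder; the names are ours, not the paper's.
LEVELS = {"public": 0, "internal": 1, "sensitive": 2, "critical": 3}

@dataclass
class DataItem:
    key: str
    sensitivity: str  # one of LEVELS
    replicas: int     # how many fog nodes currently hold a copy

def can_read(requester_clearance: str, item: DataItem) -> bool:
    """A requester may read an item only if its clearance level is
    at least the item's sensitivity level."""
    return LEVELS[requester_clearance] >= LEVELS[item.sensitivity]

def replica_target(item: DataItem, live_nodes: int) -> int:
    """Illustrative replication rule: more sensitive data is copied
    to fewer nodes to limit exposure; public data spreads widely
    for availability on a churning fog network."""
    spread = max(1, live_nodes - 2 * LEVELS[item.sensitivity])
    return min(spread, live_nodes)

item = DataItem("sensor/42", "sensitive", replicas=0)
ok = can_read("internal", item)               # clearance too low
target = replica_target(item, live_nodes=8)   # 8 - 2*2 = 4 replicas
```

A dynamic policy, such as the reinforcement-learned one proposed in the paper, would replace these fixed rules with decisions adapted to the observed requests and node churn.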
Procedia PDF Downloads 591043 Adopt and Apply Research-Supported Standards and Practices to Ensure Quality for Online Education and Digital Learning at Course, Program and Institutional Levels
Authors: Yaping Gao
Abstract:
With the increasing globalization of education and the continued momentum and wider adoption of online and digital learning all over the world post-pandemic, how can the best practices and extensive experience gained by the higher education community over the past few decades be adopted and adapted to benefit international communities, which can be vastly different culturally and pedagogically? How can schools and institutions adopt, adapt, and apply these proven practices to develop strategic plans for digital transformation at the institutional level, and to improve or create quality online or digital learning environments at the course and program levels to help all students succeed? The presenter will introduce the primary components of the US-based quality assurance process, including: 1) five sets of research-supported standards to guide the design, development, and review of online and hybrid courses; 2) professional development offerings and pathways for administrators, faculty, and instructional support staff; 3) a peer-review process for course/program reviews, resulting in constructive recommendations for continuous improvement, certification of quality, and international recognition; and 4) implementation of the quality assurance process on a continuum toward program excellence, achievement of institutional goals, and facilitation of accreditation and success. Regardless of language, culture, pedagogical practices, or technological infrastructure, the core elements of quality teaching and learning remain the same across all delivery formats; what is unique is how to ensure that quality in online education and digital learning. No one knows all the answers, but no one needs to reinvent the wheel either. 
Together, the international education community can support and learn from one another to achieve institutional goals and ensure all students succeed in digital learning environments. Keywords: online education, digital learning, quality assurance, standards and best practices
Procedia PDF Downloads 241042 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to problems related to monetary policy, bank regulation, and similar topics. When it comes to predicting the effects of local economic disruptions, such as major disasters, policy changes, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted. 
Efficient communication among MPI processes is achieved by combining MPI derived datatypes with newer MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy; as an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro zone (i.e., 322 million agents). Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
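The load-balancing step can be sketched classically. The greedy longest-processing-time heuristic and toy agent weights below are our own stand-in for the paper's employer-employee graph partitioner, used only to illustrate the trade-off between balanced rank loads and cross-rank communication:

```python
import heapq

# Hypothetical agents with a per-step computational weight
agents = {f"firm{i}": w for i, w in enumerate([9, 7, 6, 5, 5, 4, 3, 1])}
# Hypothetical employer-employee style interaction edges
edges = [("firm0", "firm1"), ("firm0", "firm2"), ("firm3", "firm4"),
         ("firm5", "firm6"), ("firm1", "firm7")]

def lpt_partition(weights, n_ranks):
    """Greedy longest-processing-time assignment: the heaviest
    remaining agent goes to the currently lightest rank (a classic
    balancing heuristic, not the paper's actual partitioner)."""
    heap = [(0, r) for r in range(n_ranks)]
    heapq.heapify(heap)
    placement = {}
    for agent, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        load, rank = heapq.heappop(heap)
        placement[agent] = rank
        heapq.heappush(heap, (load + w, rank))
    return placement

def comm_volume(placement, edges):
    """Interactions crossing rank boundaries need MPI messages."""
    return sum(1 for a, b in edges if placement[a] != placement[b])

placement = lpt_partition(agents, n_ranks=2)
cross = comm_volume(placement, edges)
loads = [0, 0]
for agent, rank in placement.items():
    loads[rank] += agents[agent]
```

A graph-aware partitioner, as used in the paper, would trade a little load imbalance for fewer cut edges; here the weight-only heuristic balances the ranks to 21 versus 19 but leaves three of the five interactions crossing rank boundaries.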
Procedia PDF Downloads 1271041 Local Homology Modules
Authors: Fatemeh Mohammadi Aghjeh Mashhad
Abstract:
In this paper, we give several methods for computing generalized local homology modules using Gorenstein flat resolutions. We also establish bounds for the vanishing of generalized local homology modules. Keywords: a-adic completion functor, generalized local homology modules, Gorenstein flat modules
Procedia PDF Downloads 4191040 Heat Transfer and Diffusion Modelling
Authors: R. Whalley
Abstract:
The heat transfer modelling of a diffusion process will be considered, and the difficulties in computing the time-distance dynamics of the representation will be addressed. Incomplete and irrational Laplace functions will be identified as the computational issue, and alternative approaches to the response evaluation process will be provided. An illustrative application problem will be presented, with graphical results confirming the theoretical procedures employed. Keywords: heat, transfer, diffusion, modelling, computation
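A standard example of the irrational-Laplace-function difficulty is one-dimensional diffusion into a semi-infinite medium after a surface temperature step, whose transfer function exp(-x*sqrt(s/alpha))/s is irrational in s yet inverts to a closed-form complementary error function. The sketch below evaluates that textbook response with hypothetical material values of our own choosing; the paper's actual problem and evaluation method are not reproduced here:

```python
import math

def step_response(x, t, alpha):
    """Temperature rise ratio T(x,t)/T_surface for a semi-infinite
    medium after a surface temperature step: the inverse Laplace
    transform of exp(-x*sqrt(s/alpha))/s, which is
    erfc(x / (2*sqrt(alpha*t)))."""
    if t <= 0:
        return 0.0
    return math.erfc(x / (2.0 * math.sqrt(alpha * t)))

# Hypothetical numbers: a steel-like diffusivity, 1 cm depth
alpha = 1.2e-5   # thermal diffusivity, m^2/s
x = 0.01         # depth, m
early = step_response(x, t=1.0, alpha=alpha)     # little heat has arrived
late = step_response(x, t=3600.0, alpha=alpha)   # profile nearly settled
```

The response rises monotonically from near zero toward the surface value, which is the time-distance behaviour the abstract sets out to compute efficiently.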
Procedia PDF Downloads 5531039 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations
Authors: Deepak Singh, Rail Kuliev
Abstract:
The abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting several proposed strategies: implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, reskilling and upskilling employees, and establishing robust data management training programs are an integral part of this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. By embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in an ever-evolving industry. Keywords: master data management, IoT, AI and ML, cloud computing, data optimization
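A minimal sketch of the master data management idea, assuming two source systems that disagree on a well identifier. The record layout, key normalisation, and last-non-null survivorship rule are our own illustrative choices, not the paper's design:

```python
# Hypothetical well records arriving from two upstream source systems
records = [
    {"source": "drilling_db", "well_id": "W-104",
     "name": "Well 104", "depth_m": 3120},
    {"source": "production_db", "well_id": "w104",
     "name": "WELL 104", "depth_m": None},
]

def canonical_key(rec):
    """Normalise the well identifier so both systems agree on one key."""
    return rec["well_id"].upper().replace("-", "").replace(" ", "")

def merge(records):
    """Golden-record style merge: records sharing a canonical key are
    folded together; per field, the last non-null value survives."""
    master = {}
    for rec in records:
        golden = master.setdefault(canonical_key(rec), {})
        for field, value in rec.items():
            if value is not None:
                golden[field] = value
    return master

master = merge(records)
# Both rows collapse to one golden record under key "W104"; the depth
# missing from the production system is retained from the drilling system.
```

Real MDM platforms add configurable survivorship rules, lineage, and stewardship workflows; this sketch shows only the core deduplicate-and-merge step that breaks the data silos the abstract describes.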
Procedia PDF Downloads 701038 Government Policy over the Remuneration System of the Board of Commissioners in Indonesian State-Owned Enterprises
Authors: Synthia Atas Sari
Abstract:
The purpose of this paper is to examine the impact of the government-determined reward system on the work of the Board of Commissioners in implementing good corporate governance in Indonesian state-owned enterprises. To do so, this study analyzes the adequacy of the remuneration, the attractiveness of the job, and the board's commitment and dedication under the remuneration system. A qualitative method was used to examine the significant features of, and challenges to, the government policy on determining remuneration for the board of commissioners in their roles. Data were gathered through semi-structured in-depth interviews with twenty-one participants across nine Indonesian state-owned enterprises, together with written documents. The findings indicate that government policies on the remuneration system are not effective in increasing the performance of boards of commissioners in implementing good corporate governance in Indonesian state-owned enterprises, owing to the unattractive remuneration amount, the demotivation of active members, and conflicts of interest among members of the remuneration committee. Keywords: reward system, board of commissioners, state-owned enterprises, government policy
Procedia PDF Downloads 3341037 Low Enrollment in Civil Engineering Departments: Challenges and Opportunities
Authors: Alaa Yehia, Ayatollah Yehia, Sherif Yehia
Abstract:
There is a recurring issue of low enrollment across many civil engineering departments in postsecondary institutions. While there have been moments where enrollment begins to increase, civil engineering departments across the Middle East have faced low enrollment, at around 60%, over the last five years. Many reasons could be attributed to this decline, such as low entry-level salaries, over-saturation of civil engineering graduates in the job market, and a lack of construction projects due to the impending or current recession. However, this recurring problem points to an intrinsic issue with the curriculum. The societal shift toward high technology such as machine learning (ML) and artificial intelligence (AI) demands individuals who are proficient at utilizing it; existing curricula must therefore adapt to this change in order to provide an education that is suitable for potential and current students. To provide potential solutions for this issue, this paper considers two possible ways of implementing high technology in the civil engineering curriculum. The first approach is to introduce a course on applications of high technology in civil engineering contexts; the second is to intertwine applications of high technology throughout the degree. Both approaches, however, should meet the requirements of accreditation agencies. In addition to the proposed improvement of the civil engineering curriculum, a different pedagogical practice must be adopted as well. The passive learning approach might not be appropriate for Gen Z students; now more than ever, students need to be introduced to engineering topics and practice through different learning methods to ensure they will have the necessary skills for the job market. 
Different learning methods that incorporate high technology applications, like AI, must be integrated throughout the curriculum to make the civil engineering degree more attractive to prospective students. Moreover, the paper provides insight into the importance of, and an approach to, adapting the civil engineering curriculum to address the current low-enrollment crisis that civil engineering departments globally, and in the Middle East specifically, are facing. Keywords: artificial intelligence (AI), civil engineering curriculum, high technology, low enrollment, pedagogy
Procedia PDF Downloads 1661036 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize abstract complexity measures based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli-X gates. 
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle takes the nodes calculated in the previous step and checks that all of them are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be reduced further. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs. Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
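The oracle's predicate (follow one encoded edge choice per node, sum the weights, require a Hamiltonian cycle, and compare against the current cost threshold) can be modelled classically, together with the standard Grover repetition count of about (π/4)·√(N/M). The toy bounded-degree graph below (K = 2, so one choice bit per node) is our own example, not one of the paper's datasets:

```python
import math
from itertools import product

# Tiny hypothetical 4-node graph with K = 2 outgoing edges per node;
# targets[node][choice] is the edge endpoint, weights[node][choice] its weight.
targets = {0: [1, 2], 1: [2, 3], 2: [3, 0], 3: [0, 1]}
weights = {0: [4, 1], 1: [2, 6], 2: [3, 5], 3: [7, 2]}

def oracle(choices, threshold):
    """Classical model of the oracle predicate: starting at node 0,
    follow one encoded edge choice per node, accumulate the edge
    weights, and mark the state only if the walk is a Hamiltonian
    cycle with total cost strictly below the current threshold."""
    visited, cost, node = [], 0, 0
    for _ in targets:
        c = choices[node]
        cost += weights[node][c]
        node = targets[node][c]
        visited.append(node)
    is_cycle = node == 0 and len(set(visited)) == len(targets)
    return is_cycle and cost < threshold

def grover_iterations(n_states, n_marked):
    """Optimal Grover repetition count, about (pi/4) * sqrt(N/M)."""
    return max(1, math.floor((math.pi / 4) * math.sqrt(n_states / n_marked)))

# Exhaustive check over all 2^4 encodings (feasible only at toy size):
marked = [c for c in product([0, 1], repeat=4) if oracle(c, threshold=99)]
```

On this graph only the encoding (0, 0, 0, 0), the cycle 0→1→2→3→0 of cost 16, is marked, so one Grover search over the 16 basis states needs about 3 oracle iterations; lowering the threshold to the found cost and repeating, as the paper describes, then confirms no cheaper cycle exists.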
Procedia PDF Downloads 1901035 The Sustainable Cultural Tourism of Nakhon Si Thammarat Province in Thailand
Authors: Narong Anurak
Abstract:
The objectives of the study were to determine the factors influencing tourists’ destination decision-making for cultural tourism in the southern provinces, to examine the potential for developing cultural tourism, and to provide guidelines for a marketing strategy for cultural tourism in Nakhon Si Thammarat. Both quantitative and qualitative data were used in this study. The sample of 400 cases for quantitative analysis comprised tourists who were interested in cultural tourism in the southern provinces and had traveled to cultural sites in Nakhon Si Thammarat, Surat Thani, and Phuket, along with 14 representatives from the provincial tourism committee of Nakhon Si Thammarat. The study found that Thai and foreign tourists are influenced by different marketing mix factors (7Ps) when making decisions about cultural tourism in the southern provinces. For Thai respondents, physical evidence, price, people, and place were of high importance, whereas product, process, and promotion were of moderate importance. Keywords: marketing mix factors, Nakhon Si Thammarat province, sustainable cultural tourism, tourists decision making
Procedia PDF Downloads 2741034 Developing Active Learners and Efficient Users: A Study on the Implementation of Spoken Interaction Skill in the Malay Language Curriculum in Singapore
Authors: Pairah Bte Satariman
Abstract:
This study was carried out to evaluate the Malay Language Curriculum for secondary schools in Singapore. The evaluation focuses on the implementation of the Spoken Interaction Skill, which was recommended by the Curriculum Review Committee in 2010, as students face difficulty in communicating interactively with others in their daily activities. The purpose of the study is to evaluate the results (products) of the implementation of this skill since 2011. The research used a qualitative method, which includes an oral test and interviews with students and with teachers teaching the subject. Preliminary findings show that, generally, the students are not able to communicate interactively and fluently in the oral test unless they are given enough prompts. The teachers feel that the implementation of the skill is timely, as students are increasingly keen to use English in their daily communication, even in Malay language classes. Teachers also mentioned challenges in the implementation, such as insufficient curriculum time and teaching materials. Keywords: evaluation, Malay language curriculum, spoken interaction skills, communication, implementation
Procedia PDF Downloads 146