Search results for: heading time
14554 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence
Authors: Nasser Salah Eldin Mohammed Salih Shebka
Abstract:
Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which in turn are reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering, and knowledge generation. Although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation conceptions. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are affected greatly by the very materialistic nature of the cognitive sciences. This nature caused what we define as methodological deficiencies in the nature of theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields, but also extend to cover the scientific nature of the cognitive sciences. The methodological deficiencies we investigated in our work are: (1) the segregation between cognitive abilities in knowledge-driven models; (2) the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories; and (3) the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires that we present a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge.
This does not imply that it is easy to apply in structures of knowledge representation systems, but outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, it constructs a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning shifts the role of the existence and time factors to the framework environment of the knowledge structure, and therefore to knowledge representation conceptual theories. Findings of our work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as an evaluation criterion to determine AI's capability to achieve its ultimate objectives. Ultimately, we discuss some of the implications of our findings. They do not suggest that scientific progress has reached its peak, or that human scientific evolution has reached a point where it is no longer possible to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge; they simply imply that, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.
Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic
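The second deficiency listed in the abstract concerns the insufficiency of two-valued logic. As one illustration of what a richer alternative looks like, the sketch below implements Kleene's strong three-valued logic, in which a third value stands for propositions whose truth cannot (yet) be established in a knowledge base. The numeric encoding of the truth values is an illustrative choice, not part of the paper.

```python
# Kleene's strong three-valued logic: a minimal sketch of one alternative
# to the two-valued logic the abstract critiques. The encoding of the
# truth values as 1.0 / 0.5 / 0.0 is an illustrative assumption.
T, U, F = 1.0, 0.5, 0.0  # true, unknown, false

def k_not(a):
    return 1.0 - a

def k_and(a, b):
    return min(a, b)

def k_or(a, b):
    return max(a, b)

# Unlike in two-valued logic, "p or not p" is not always true here:
p = U
print(k_or(p, k_not(p)))  # the law of excluded middle fails for U
```

Note how the unknown value propagates through conjunction and disjunction, which is exactly the behavior a two-valued machine-level representation cannot express.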
Procedia PDF Downloads 113
14553 Received Signal Strength Indicator Based Localization of Bluetooth Devices Using Trilateration: An Improved Method for the Visually Impaired People
Authors: Muhammad Irfan Aziz, Thomas Owens, Uzair Khaleeq uz Zaman
Abstract:
The instantaneous and spatial localization of visually impaired people in dynamically changing environments, with unexpected hazards and obstacles, is the most demanding and challenging issue faced by navigation systems today. Since Bluetooth cannot utilize techniques like Time Difference of Arrival (TDOA) and Time of Arrival (TOA), it uses the received signal strength indicator (RSSI) to measure Received Signal Strength (RSS). Measurements using RSSI can be improved significantly by improving the existing methodologies related to RSSI. Therefore, the current paper focuses on proposing an improved method using trilateration for the localization of Bluetooth devices for visually impaired people. To validate the method, class 2 Bluetooth devices were used along with purpose-developed software. Experiments were then conducted to obtain surface plots that showed the signal interferences and other environmental effects. Finally, the results obtained show the surface plots for all Bluetooth modules used, with the strong and weak points depicted by color codes in red, yellow, and blue. It was concluded that the suggested improved method of measuring RSS using trilateration helped not only to measure signal strength effectively but also highlighted how the signal strength can be influenced by atmospheric conditions such as noise, reflections, etc.
Keywords: Bluetooth, indoor/outdoor localization, received signal strength indicator, visually impaired
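As a hedged sketch of the kind of computation involved (not the authors' implementation), the following converts RSSI readings to distances with the common log-distance path-loss model and then trilaterates a 2-D position from three anchors by linearized least squares. The reference power `p0` (RSSI at 1 m) and path-loss exponent `n` are illustrative assumptions.

```python
import numpy as np

def rssi_to_distance(rssi, p0=-59.0, n=2.0):
    """Log-distance path-loss model: d = 10**((p0 - rssi) / (10 n)).
    p0 and n are illustrative values, not from the paper."""
    return 10 ** ((p0 - rssi) / (10 * n))

def trilaterate(anchors, distances):
    """Linearize by subtracting the last circle equation from the others,
    then solve the resulting least-squares system for (x, y)."""
    (x3, y3), d3 = anchors[-1], distances[-1]
    A, b = [], []
    for (xi, yi), di in zip(anchors[:-1], distances[:-1]):
        A.append([2 * (x3 - xi), 2 * (y3 - yi)])
        b.append(di**2 - d3**2 - xi**2 + x3**2 - yi**2 + y3**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Synthetic check: three anchors, distances taken from a known position.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, dists))  # recovers approximately [3, 4]
```

In practice the distances would come from noisy RSSI readings, so more than three anchors and the same least-squares machinery would be used to average out the noise.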
Procedia PDF Downloads 134
14552 Jejunostomy and Protective Ileostomy in a Patient with Massive Necrotizing Enterocolitis: A Case Report
Authors: Rafael Ricieri, Rogerio Barros
Abstract:
Objective: This study reports a case of massive necrotizing enterocolitis in a six-month-old patient, requiring ileostomy and protective jejunostomy as a damage-control measure in the first exploratory laparotomy for massive enterocolitis without a previous diagnosis. Methods: This study is a case report of the successful creation and closure of a protective jejunostomy. The low number of publications on this staged and risky surgical strategy encouraged the team to study the indication for, and especially the correct timing of, closure of the patient's protective jejunostomy. The main study instrument was the six-month-old patient's medical record. Results: Based on the observation of the case described, the timing of closure of the protective jejunostomy varies according to the degree of compromise of the patient's health status and is individual to each patient. Early closure, or failure to close, can lead to serious problems for the patient, since several complications can result from this closure, such as new intestinal perforations and hydro-electrolyte disturbances. Despite the risk of new perforations, we suggest closing the protective jejunostomy around the 14th day after the procedure, keeping the patient on broad-spectrum antibiotic therapy and absolute fasting, thus reducing the chances of new intestinal perforations. In association with closure of the jejunostomy, a gastric tube for decompression is necessary, as are care in an intensive care unit and electrolyte replacement to maintain the stability of the case.
Keywords: jejunostomy, ileostomy, enterocolitis, pediatric surgery, gastric surgery
Procedia PDF Downloads 84
14551 Strategies for Good Governance during Crisis in Higher Education
Authors: Naziema B. Jappie
Abstract:
Over the last 23 years, leaders in government, political parties, and universities have spent much time identifying and discussing various gaps in the system that impact systematically on students, especially those from historically Black communities. Equity and access to higher education were two critical aspects in achieving the transformation goals, together with a funding model for those previously disadvantaged. Free education was not a feasible option for the government. Institutional leaders in higher education face many demands on their time and resources. Often, crisis management planning, or consideration of being proactive and preventative, is not a standing agenda item. With many issues taking priority in academia, people become complacent and think that a crisis may not affect them, or that they will cross that bridge when they get to it. Historically, South Africa has proven to be a country of militancy, strikes, and protests in most industries, some leading to disastrous outcomes. Higher education was no different between October 2015 and late 2016, when the #RhodesMustFall protest, which morphed into the #FeesMustFall protest, challenged the establishment, changed the social fabric of universities, and brought the sector to a standstill. Some institutional leaders and administrators were better than others at handling unexpected, high-consequence situations. For the most part, crisis leadership is viewed as a situation rather than a style of leadership, and is usually characterized by crisis management. The objective of this paper is to show how institutions managed everything from catastrophes of disastrous proportions down to unexpected incidents in 2015/2016. The content draws on the presenter's extensive past crisis management experience and includes the occurrences of the recent protests, giving an event timeline.
Responses from interviews with institutional leaders and administrators, as well as students, provide first-hand information on their experiences and the outcomes. Students have tasted the power of organized action, and they demand immediate change; if it does not come, the revolts will continue. This paper examines the approaches that guided institutional leaders and their crisis teams, and the sector's crisis response. It further expands on whether the solutions effectively changed governance in higher education or merely minimized the need for more protests. The conclusion gives an insight into the future of higher education in South Africa from a leadership perspective.
Keywords: crisis, governance, intervention, leadership, strategies, protests
Procedia PDF Downloads 147
14550 Northern Nigeria Vaccine Direct Delivery System
Authors: Evelyn Castle, Adam Thompson
Abstract:
Background: In 2013, the Kano State Primary Health Care Management Board redesigned its routine immunization supply chain from diffused pull to direct-delivery push. The redesign addressed issues around stockouts and reduced the time spent by health facility staff collecting vaccines and reporting on vaccine usage. The health care board sought the help of a 3PL for twice-monthly deliveries from its cold store to 484 facilities across 44 local governments. eHA's Health Delivery Systems group formed a 3PL to serve 326 of these facilities in partnership with the State. We focused on designing and implementing a technology system throughout. Basic methodologies: GIS mapping: planning the delivery of vaccines to hundreds of health facilities requires detailed route planning for delivery vehicles. Mapping the road networks across Kano and Bauchi with a custom routing tool provided information for the optimization of deliveries, reducing the number of kilometers driven each round by 20% and thereby reducing cost and delivery time. Direct Delivery Information System: vaccine direct deliveries are facilitated through pre-round planning (driven by a health facility database, extensive GIS, and inventory workflow rules), manager and driver control panels customizing delivery routines and reporting, a progress dashboard, schedules/routes, packing lists, delivery reports, and driver data collection applications. MOVE, a last-mile logistics management system: MOVE has made vaccine supply information management timely, accurate, and actionable. It provides stock management workflow support, alerts management for cold chain exceptions and stockouts, and on-device analytics for health and supply chain staff. The software was built offline-first with a user-validated interface and experience. Deployed to hundreds of vaccine storage sites, the improved information tools help facilitate the process of system redesign and change management.
Findings: stock-outs were reduced from 90% to 33%; current health systems were redesigned, managing vaccine supply for 68% of Kano's wards; near real-time reporting and data availability make it possible to track stock; the paperwork burden on health staff has been dramatically reduced; medicine is available when the community needs it; consistent vaccination dates for children under one help prevent polio, yellow fever, and tetanus; and higher immunization rates mean lower infection rates. Hundreds of millions of Naira worth of vaccines have been successfully transported: fortnightly service to 326 facilities in 326 wards across 30 local government areas; 6,031 cumulative deliveries; over 3.44 million doses transported; a minimum travel distance of 2,000 km and a maximum of 6,297 km covered in a round of delivery; 153,409 km travelled by 6 drivers; 500 facilities in 326 wards; and data captured and synchronized for the first time, making data-driven decision making possible. Conclusion: eHA's vaccine direct delivery has met the challenges in Kano and Bauchi States and provided a reliable delivery service of vaccinations that ensures health facilities can run vaccination clinics for children under one. eHA uses innovative technology that delivers vaccines from Northern Nigerian zonal stores straight to healthcare facilities. It has helped healthcare workers spend less time managing supplies and more time delivering care, and will be rolled out nationally across Nigeria.
Keywords: direct delivery information system, health delivery system, GIS mapping, Northern Nigeria, vaccines
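The custom GIS routing tool is not described in enough detail to reproduce, but a hedged sketch of one simple route-optimization heuristic of the kind a delivery planner might apply per vehicle round is the nearest-neighbour tour below. The depot and facility coordinates are purely illustrative.

```python
import math

def nearest_neighbour_route(depot, facilities):
    """Greedy tour: always drive to the closest unvisited facility,
    then return to the cold store. A simple illustrative heuristic,
    not the project's actual routing algorithm."""
    route, remaining = [depot], list(facilities)
    while remaining:
        last = route[-1]
        nxt = min(remaining, key=lambda f: math.dist(last, f))
        route.append(nxt)
        remaining.remove(nxt)
    route.append(depot)  # return to the cold store
    return route

def route_length(route):
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))

depot = (0.0, 0.0)
facilities = [(2.0, 1.0), (5.0, 3.0), (1.0, 4.0), (6.0, 0.0)]
route = nearest_neighbour_route(depot, facilities)
print(route, round(route_length(route), 2))
```

Real road-network routing works on graph distances rather than straight-line ones, and production planners typically refine such greedy tours with local-search improvements.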
Procedia PDF Downloads 373
14549 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems
Authors: Rodolfo Lorbieski, Silvia Modesto Nassar
Abstract:
Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition, and medical diagnostics. The objective of this article is to analyze an adapted Stacking method for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels, where the final decision-maker (level 2) performs its training by combining outputs from the tree-based pair of meta-classifiers (level 1) from Bayesian families. These are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments, and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
Keywords: stacking, multi-layers, ensemble, multi-class
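A three-level stacking ensemble of this general shape can be sketched in scikit-learn by nesting `StackingClassifier` instances: level-0 base pairs feed level-1 meta-classifiers, which a level-2 decision maker combines. The specific estimators and the iris dataset below are illustrative choices, not the authors' experimental setup.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Two level-1 meta-classifiers, each stacking a pair of level-0 base
# classifiers (estimator choices are illustrative assumptions).
level1_a = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=25, random_state=0))],
    final_estimator=GaussianNB())
level1_b = StackingClassifier(
    estimators=[("nb1", GaussianNB()),
                ("nb2", GaussianNB(var_smoothing=1e-6))],
    final_estimator=GaussianNB())

# The level-2 decision maker combines the two level-1 meta-classifiers.
level2 = StackingClassifier(
    estimators=[("meta_a", level1_a), ("meta_b", level1_b)],
    final_estimator=LogisticRegression(max_iter=1000))

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
level2.fit(X_tr, y_tr)
print(round(level2.score(X_te, y_te), 3))
```

Nesting stacks this way lets diversity be injected at each level, which is the property the paper's design aims to exploit for multi-class problems.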
Procedia PDF Downloads 269
14548 Thermal-Mechanical Analysis of a Bridge Deck to Determine Residual Weld Stresses
Authors: Evy Van Puymbroeck, Wim Nagy, Ken Schotte, Heng Fang, Hans De Backer
Abstract:
The knowledge of residual stresses in welded bridge components is essential to determine the effect of the residual stresses on the fatigue life behavior. The residual stresses of an orthotropic bridge deck are determined by simulating the welding process with finite element modelling. The stiffener is placed on top of the deck plate before welding. A chained thermal-mechanical analysis is set up to determine the distribution of residual stresses for the bridge deck. First, a thermal analysis is used to determine the temperatures of the orthotropic deck at different time steps during the welding process. Twin-wire submerged arc welding is used to construct the orthotropic plate. A double-ellipsoidal volume heat source model is used to describe the heat flow through the material for a moving heat source. The heat input is used to determine the heat flux, which is applied as a thermal load during the thermal analysis. The heat flux for each element is calculated at different time steps to simulate the passage of the welding torch at the considered welding speed. This results in a time-dependent heat flux that is applied as a thermal load. Thermal material behavior is specified by assigning the properties of the material as a function of the high temperatures reached during welding. Isotropic hardening behavior is included in the model. The thermal analysis simulates the heat introduced in the two plates of the orthotropic deck and calculates the temperatures during the welding process. After the calculation of the temperatures introduced during the welding process in the thermal analysis, a subsequent mechanical analysis is performed. For the boundary conditions of the mechanical analysis, the actual welding conditions are considered. Before welding, the stiffener is connected to the deck plate using tack welds, and these tack welds are implemented in the model. The deck plate is allowed to expand freely in an upward direction while it rests on a firm and flat surface.
This behavior is modelled by using grounded springs. Furthermore, symmetry points and lines are used to prevent the model from moving freely in other directions. In the mechanical analysis, a mechanical material model is used. The temperatures calculated during the thermal analysis are introduced in the mechanical analysis as a time-dependent load. The connection of the elements of the two plates in the fusion zone is realized with a glued connection which is activated when the welding temperature is reached. The mechanical analysis results in a distribution of the residual stresses. The distribution of the residual stresses of the orthotropic bridge deck is compared with results from literature. Literature proposes uniform tensile yield stresses in the weld, while the finite element modelling showed tensile yield stresses at a short distance from the weld root or the weld toe. The chained thermal-mechanical analysis results in a distribution of residual weld stresses for an orthotropic bridge deck. In future research, the effect of these residual stresses on the fatigue life behavior of welded bridge components can be studied.
Keywords: finite element modelling, residual stresses, thermal-mechanical analysis, welding simulation
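The double-ellipsoidal volume heat source mentioned in the abstract is commonly formulated as Goldak's model, with separate front and rear ellipsoid quadrants. A minimal sketch, with illustrative parameters rather than the authors' values, is:

```python
import math

def goldak_flux(x, y, z, Q=5000.0, a_f=0.008, a_r=0.016, b=0.004, c=0.004,
                f_f=2/3, f_r=4/3):
    """Goldak double-ellipsoidal volumetric heat flux (W/m^3) in torch
    coordinates; x runs along the weld, front quadrant for x >= 0.
    Q is the effective heat input; a_f, a_r, b, c are semi-axes;
    f_f + f_r = 2, with f_f/a_f = f_r/a_r for continuity at x = 0.
    All numeric values here are illustrative assumptions."""
    a = a_f if x >= 0 else a_r
    f = f_f if x >= 0 else f_r
    return (6 * math.sqrt(3) * f * Q
            / (a * b * c * math.pi * math.sqrt(math.pi))
            * math.exp(-3 * x**2 / a**2 - 3 * y**2 / b**2 - 3 * z**2 / c**2))

# Peak flux at the torch centre, decaying away from it along the weld:
print(goldak_flux(0.0, 0.0, 0.0))
print(goldak_flux(0.01, 0.0, 0.0))
```

In the finite element model this flux is evaluated at each element for each time step in the moving torch frame, which is what produces the time-dependent thermal load the abstract describes.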
Procedia PDF Downloads 171
14547 Accelerated Molecular Simulation: A Convolution Approach
Authors: Jannes Quer, Amir Niknejad, Marcus Weber
Abstract:
Computational drug design is often based on molecular dynamics simulations of molecular systems. Molecular dynamics can be used to simulate, e.g., the binding and unbinding event of a small drug-like molecule with regard to the active site of an enzyme or a receptor. However, the time-scale of the overall binding event is many orders of magnitude longer than the time-scale of the simulation. Thus, there is a need to speed up molecular simulations. In order to do so, the molecular dynamics trajectories have to be "steered" out of the local minimizers of the potential energy surface – the so-called metastabilities – of the molecular system. Increasing the kinetic energy (temperature) is one possibility to accelerate simulated processes. However, with temperature the entropy of the molecular system increases, too, and this kind of "steering" is not directed enough to steer the molecule out of the minimum toward the saddle point. In this article, we give a new mathematical idea of how a potential energy surface can be changed in such a way that entropy is kept under control while the trajectories are still steered out of the metastabilities. In order to compute the unsteered transition behaviour based on a steered simulation, we propose to use extrapolation methods. In the end we show mathematically that our method accelerates the simulations along the direction in which the curvature of the potential energy surface changes the most, i.e., from local minimizers towards saddle points.
Keywords: extrapolation, Eyring-Kramers, metastability, multilevel sampling
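The effect of the convolution idea can be illustrated in one dimension: smoothing a double-well potential with a Gaussian kernel lowers the barrier between the metastable wells, so trajectories escape local minima without the temperature increase that would raise entropy. The 1-D potential and kernel width below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

x = np.linspace(-2.5, 2.5, 501)
V = (x**2 - 1.0) ** 2  # double well: minima at x = +/-1, barrier at x = 0

def gaussian_smooth(values, dx, sigma):
    """Convolve a sampled potential with a normalized Gaussian kernel."""
    half = int(4 * sigma / dx)
    kernel = np.exp(-0.5 * ((np.arange(-half, half + 1) * dx) / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(values, kernel, mode="same")

dx = x[1] - x[0]
V_smooth = gaussian_smooth(V, dx, sigma=0.4)

inner = np.abs(x) <= 1.5  # avoid boundary artifacts of the convolution
mid = len(x) // 2
barrier = V[mid] - V[inner].min()
barrier_smooth = V_smooth[mid] - V_smooth[inner].min()
print(barrier, barrier_smooth)  # the smoothed barrier is lower
```

Dynamics on the smoothed surface cross between wells more often; extrapolation methods, as the abstract proposes, are then needed to recover the transition behaviour of the original, unsmoothed surface.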
Procedia PDF Downloads 328
14546 Energy Management Method in DC Microgrid Based on the Equivalent Hydrogen Consumption Minimum Strategy
Authors: Ying Han, Weirong Chen, Qi Li
Abstract:
An energy management method based on the equivalent hydrogen consumption minimum strategy is proposed in this paper for a direct-current (DC) microgrid consisting of photovoltaic cells, fuel cells, energy storage devices, converters, and DC loads. The rational allocation of fuel cell and battery devices is achieved by adopting the equivalent minimum hydrogen consumption strategy, with full use of the power generated by the photovoltaic cells. Considering the balance of the battery's state of charge (SOC), the optimal power of the battery under different SOC conditions is obtained and the reference output power of the fuel cell is calculated. A droop control method based on a time-varying droop coefficient is then proposed to realize the automatic charge and discharge control of the battery, balance the system power, and maintain the bus voltage. The proposed control strategy is verified on an RT-LAB hardware-in-the-loop simulation platform. The simulation results show that the designed control algorithm can realize the rational allocation of DC microgrid energy and improve the stability of the system.
Keywords: DC microgrid, equivalent minimum hydrogen consumption strategy, energy management, time-varying droop coefficient, droop control
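A hedged sketch of what an SOC-dependent (time-varying) droop coefficient can look like is given below: the lower the battery's state of charge, the steeper its discharge droop, so a depleted battery contributes a smaller share of the load. The nominal bus voltage, coefficient bounds, and the linear SOC mapping are illustrative assumptions, not the paper's control law.

```python
V_NOM = 400.0            # nominal DC bus voltage (V), illustrative
K_MIN, K_MAX = 0.5, 4.0  # droop coefficient bounds (V/A), illustrative

def droop_coefficient(soc):
    """Steeper droop (larger k) at low SOC -> smaller discharge share.
    soc is the battery state of charge in [0, 1]."""
    return K_MAX - (K_MAX - K_MIN) * soc

def droop_voltage(i_out, soc):
    """Classic V-I droop: the converter's voltage reference sags in
    proportion to its output current, scaled by the SOC-dependent k."""
    return V_NOM - droop_coefficient(soc) * i_out

for soc in (0.2, 0.5, 0.9):
    print(soc, round(droop_voltage(10.0, soc), 1))
```

With this shaping, a nearly full battery holds the bus voltage up under load better than a nearly empty one, which automatically steers power flow toward the healthier storage device.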
Procedia PDF Downloads 303
14545 A Mathematical Analysis of a Model in Capillary Formation: The Roles of Endothelial, Pericyte and Macrophages in the Initiation of Angiogenesis
Authors: Serdal Pamuk, Irem Cay
Abstract:
Our model is based on the theory of reinforced random walks coupled with Michaelis-Menten mechanisms, which view endothelial cell receptors as the catalysts for transforming both tumor- and macrophage-derived tumor angiogenesis factor (TAF) into proteolytic enzyme, which in turn degrades the basal lamina. The model consists of two main parts. The first part has seven differential equations (DEs) in one space dimension over the capillary, whereas the second part has the same number of DEs in two space dimensions in the extracellular matrix (ECM). We connect these two parts via boundary conditions to move the cells into the ECM in order to initiate capillary formation. But when does this movement begin? To address this question we estimate the thresholds that activate the transport equations in the capillary. We do this by using steady-state analysis of the TAF equation under some assumptions. Once these equations are activated, endothelial, pericyte, and macrophage cells begin to move into the ECM for the initiation of angiogenesis. We believe that our results play an important role in understanding the mechanisms of cell migration, which are crucial for tumor angiogenesis. Furthermore, we estimate the long-time tendency of these three cell types, and find that they tend to the transition probability functions as time evolves. We provide our numerical solutions, which are in good agreement with our theoretical results.
Keywords: angiogenesis, capillary formation, mathematical analysis, steady-state, transition probability function
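The Michaelis-Menten step of such a model, considered in isolation, can be sketched as a single ODE: receptors convert TAF (concentration c) into proteolytic enzyme e at rate v = v_max c / (k_m + c), while the enzyme decays linearly. The parameter values, the decay term, and the Euler time step below are illustrative assumptions and not the paper's seven-equation system.

```python
V_MAX, K_M, DECAY = 1.0, 0.5, 0.3  # illustrative kinetic constants

def enzyme_production_rate(c_taf):
    """Michaelis-Menten rate: saturates as TAF concentration grows."""
    return V_MAX * c_taf / (K_M + c_taf)

def simulate_enzyme(c_taf, t_end=50.0, dt=0.01):
    """Euler integration of de/dt = v(c) - DECAY * e toward steady state."""
    e = 0.0
    for _ in range(int(t_end / dt)):
        e += dt * (enzyme_production_rate(c_taf) - DECAY * e)
    return e

# Steady state e* = v(c)/DECAY, saturating in the TAF concentration:
for c in (0.1, 1.0, 10.0):
    print(c, round(simulate_enzyme(c), 3))
```

This saturating steady state is the kind of quantity a threshold analysis like the paper's can compare against a critical level to decide when the transport equations activate.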
Procedia PDF Downloads 156
14544 A Study on the Improvement of Mobile Device Call Buzz Noise Caused by Audio Frequency Ground Bounce
Authors: Jangje Park, So Young Kim
Abstract:
The market demand for audio quality in mobile devices continues to increase, and the audible buzz noise generated in time-division communication is a chronic problem that runs counter to this demand. In time-division communication, the RF power amplifier (RF PA) is driven at the audio frequency cycle, and it influences the audio signal in various ways. In this paper, we measured the ground bounce noise generated by the peak current flowing through the ground network of the RF PA at the audio frequency; it was confirmed that this noise is the cause of the audible buzz noise during a call. In addition, a grounding method for the microphone device that can improve the buzz noise was proposed. Considering that the level of the audio signal generated by the microphone device is -38 dBV at 94 dB Sound Pressure Level (SPL), even a ground bounce noise of several hundred µV will fall within the range of audible noise if it is picked up by the audio amplifier. Through the grounding method of the microphone device proposed in this paper, it was confirmed that the audible buzz noise power density at the RF PA driving frequency was improved by more than 5 dB under the conditions of the printed circuit board (PCB) used in the experiment. A fundamental improvement method was thus presented for the buzz noise heard during a mobile phone call.
Keywords: audio frequency, buzz noise, ground bounce, microphone grounding
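A small worked check of the abstract's numbers: a microphone signal of -38 dBV (at 94 dB SPL) against a few hundred microvolts of ground-bounce noise coupled into the audio path. The 200 µV figure below is an illustrative stand-in for "several hundred µV".

```python
import math

def volts_to_dbv(v):
    """Convert an RMS voltage to dBV (dB relative to 1 V)."""
    return 20 * math.log10(v)

signal_dbv = -38.0                 # mic output at 94 dB SPL, per the abstract
noise_dbv = volts_to_dbv(200e-6)   # about -74 dBV for 200 uV of bounce
margin_db = signal_dbv - noise_dbv # signal-to-buzz margin before amplification
print(round(noise_dbv, 1), round(margin_db, 1))
```

A margin of only about 36 dB means that, after the gain of the audio amplifier, the coupled bounce sits well within the audible range, which is consistent with the abstract's claim.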
Procedia PDF Downloads 136
14543 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping
Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello
Abstract:
Batch processes are widely used in the food industry and have an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure, and are usually monitored using control charts based on multiway principal component analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; it is clear that proper determination of the reference set is key to correct signaling of non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses, so the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process. As a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm.
Real data from a milk chocolate conching process were collected and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input to the KNN classification algorithm. The method of Kassidas, MacGregor, and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity, and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique.
Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration
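The underlying alignment that all three compared methods build on is classic dynamic time warping between two trajectories of unequal length. The sketch below shows the single-pair, univariate dynamic-programming core only; the KMT method the paper recommends adds iterative weighting across multiple variables and batches, which is not reproduced here, and the trajectories are synthetic.

```python
import math

def dtw(a, b):
    """Dynamic time warping distance between two univariate sequences,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match
    return D[n][m]

ref = [0, 1, 2, 3, 2, 1, 0]            # reference batch trajectory
slow = [0, 0, 1, 1, 2, 3, 3, 2, 1, 0]  # same shape, longer duration
print(dtw(ref, slow))                  # small despite unequal lengths
print(dtw(ref, [3, 3, 3, 0, 0, 0]))   # larger for a dissimilar batch
```

Because the warp absorbs differences in duration, batches of 495 to 1,170 minutes can be compared on shape alone, which is the property the synchronization step needs before MPCA monitoring or KNN classification.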
Procedia PDF Downloads 167
14542 Skin-Dose Mapping for Patients Undergoing Interventional Radiology Procedures: Clinical Experimentations versus a Mathematical Model
Authors: Aya Al Masri, Stefaan Carpentier, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: During an interventional radiology (IR) procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, an accurate calculation of the patient skin-dose mapping is essential. For most machines, the dose-area product (DAP) and fluoroscopy time are the only information available to the operator, and these two parameters are a very poor indicator of the peak skin dose. We developed a mathematical model that reconstructs the magnitude (delivered dose), shape, and localization of each irradiation field on the patient's skin. If a critical dose is exceeded, the system generates warning alerts. We present the results of its comparison with clinical studies. Materials and methods: Two series of comparisons of the skin-dose mapping of our mathematical model with clinical studies were performed. 1. First, clinical tests were performed on patient phantoms. Gafchromic films were placed on the table of the IR machine under PMMA plates (thickness = 20 cm) that simulate the patient. After irradiation, the film darkening is proportional to the radiation dose received by the patient's back and reflects the shape of the X-ray field. After film scanning and analysis, the exact dose value can be obtained at each point of the mapping. Four experiments were performed, constituting a total of 34 acquisition incidences covering all possible exposure configurations. 2. Second, clinical trials were launched on real patients during real chronic total occlusion (CTO) procedures, for a total of 80 cases. Gafchromic films were placed at the back of the patients. We compared the dose values, as well as the distribution and shape of the irradiation fields, between the skin-dose mapping of our mathematical model and the Gafchromic films. Results: The comparison between the dose values shows a difference of less than 15%.
Moreover, our model shows very good geometric accuracy: all fields have the same shape, size, and location (uncertainty < 5%). Conclusion: This study shows that our model is a reliable tool to warn physicians when a high radiation dose is reached. Thus, deterministic effects can be avoided.
Keywords: clinical experimentation, interventional radiology, mathematical model, patient's skin-dose mapping
Procedia PDF Downloads 140
14541 Viability Study of the Use of Solar Energy for Water Heating in Homes in Brazil
Authors: Elmo Thiago Lins Cöuras Ford, Valentina Alessandra Carvalho do Vale
Abstract:
The sun is an inexhaustible source, and harnessing its potential both for heating and for power generation is one of the most promising and necessary alternatives, mainly due to environmental issues. It should be noted that the sun has always been present in the generation of energy on the planet, if only indirectly, as it is responsible for virtually all other energy sources: it drives the evaporation stage of the water cycle, which allows impoundment and the consequent generation of electricity (hydroelectricity); winds are caused by large-scale atmospheric circulation driven by solar radiation; and oil, coal, and natural gas were generated from the remains of plants and animals that originally obtained the energy needed for their development from solar radiation. Thus, the idea of using solar energy for practical purposes for the benefit of man is not new, as it has accompanied humanity since the beginning of time. The sun was always of utmost importance in the design of shelters, or homes, constructed taking into consideration the use of sunlight, a practice that was lost through the centuries, until buildings came to be designed completely independently of the sun. The climatic rigors, however, still needed to be countered, now only artificially and, as is recognized today, unsustainably, with additional facilities fueled by energy consumption. This paper presents a study on the feasibility of using solar energy for heating water in homes, developing a simplified methodology covering the mode of operation of solar water heaters, the existing solar potential of Brazil, alternative systems, the international market, and the barriers encountered.
Keywords: solar energy, solar heating, solar project, water heating
Procedia PDF Downloads 332
14540 Rhythm-Reading Success Using Conversational Solfege
Authors: Kelly Jo Hollingsworth
Abstract:
Conversational Solfege, a research-based, 12-step music literacy instructional method using the sound-before-sight approach, was used to teach rhythm-reading to 128 second-grade students at a public school in the southeastern United States. For each step, multiple scripted techniques are supplied to teach each skill. Unit one, which covers quarter-note and barred eighth-note rhythms, was the focus of this study. During regular weekly music instruction, students completed method steps one through five, which include aural discrimination, decoding familiar and unfamiliar rhythm patterns, and improvising rhythmic phrases using quarter notes and barred eighth notes. Intact classes were randomly assigned to two treatment groups for teaching steps six through eight: the visual presentation and identification of quarter notes and barred eighth notes, visually presenting and decoding familiar patterns, and visually presenting and decoding unfamiliar patterns using said notation. For three weeks, students practiced steps six through eight during regular weekly music class. One group spent five minutes of class time on technique work for steps six through eight, while the other group spent ten minutes practicing the same techniques. A pretest and posttest were administered, and ANOVA results reveal that both the five-minute (p < .001) and ten-minute groups (p < .001) reached statistical significance, suggesting Conversational Solfege is an efficient, effective approach to teaching rhythm-reading to second-grade students. After two weeks of no instruction, students were retested to measure retention. Using a repeated-measures ANOVA, both groups reached statistical significance (p < .001) on the second posttest, suggesting both the five-minute and ten-minute groups retained rhythm-reading skill after two weeks of no instruction.
Statistical significance was not reached between groups (p = .252), suggesting five minutes of rhythm-reading practice using Conversational Solfege techniques is as effective as ten minutes. Future research includes replicating the study with other grades and units in the text.Keywords: conversational solfege, length of instructional time, rhythm-reading, rhythm instruction
Procedia PDF Downloads 157
14539 Influence of Densification Process and Material Properties on Final Briquettes Quality from FastGrowing Willows
Authors: Peter Križan, Juraj Beniak, Ľubomír Šooš, Miloš Matúš
Abstract:
Biomass treatment through densification is a very suitable and important technology prior to effective energy recovery. The densification process of biomass is significantly influenced by various technological and material parameters, which are ultimately reflected in the quality of the final solid biofuel. The paper deals with experimental research into the relationship between technological and material parameters during the densification of fast-growing trees, namely fast-growing willow. The main goal of the presented experimental research is to determine the relationship between pressing pressure and raw-material fraction size from the point of view of final briquette density. The experimental research was realized by single-axis densification. The impact of fraction size, in interaction with pressing pressure and stabilization time, on the quality properties of briquettes was determined. The interaction of these parameters affects the quality of the final solid biofuel (briquettes). From the point of view of both briquette production and the design of densification machines, it is very important to understand the mutual interaction of these parameters and their effect on final briquette quality. The experimental findings presented here show the importance of the mentioned parameters during the densification process.Keywords: briquettes density, densification, fraction size, pressing pressure, stabilization time
Procedia PDF Downloads 368
14538 Workforce Optimization: Fair Workload Balance and Near-Optimal Task Execution Order
Authors: Alvaro Javier Ortega
Abstract:
A large number of companies face the challenge of matching highly skilled professionals to high-end positions through human resource deployment professionals. However, when the list of professionals and tasks to be matched grows beyond a few dozen, the result of this process is far from optimal and takes a long time to produce. Therefore, an automated assignment algorithm for this workforce management problem is needed. The majority of companies are divided into several sectors or departments, where trained employees with different experience levels deal with a large number of tasks daily. The execution order of all tasks is also of material consequence, because some tasks can only be run once the result of another task is available. Thus, a wrong execution order leads to long waiting times between consecutive tasks. The desired goal is, therefore, to create accurate matches and a near-optimal execution order that maximizes the number of tasks performed and minimizes the idle time of expensive skilled employees. The problem described above can be modeled as a mixed-integer non-linear program (MINLP), as will be shown in detail in this paper. A large number of MINLP algorithms have been proposed in the literature. Here, genetic algorithm solutions are considered, and a comparison between two different mutation approaches is presented. The simulated results, considering different complexity levels of assignment decisions, show the appropriateness of the proposed model.Keywords: employees, genetic algorithm, industry management, workforce
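The genetic-algorithm approach named in the abstract can be illustrated with a minimal sketch. Everything below (the task costs, population size, mutation rate, and the choice of balancing only the heaviest workload rather than the full MINLP with precedence constraints) is an illustrative assumption, not the paper's actual model:

```python
import random

# Hypothetical data: processing time of each task for each employee.
# Sizes and values are illustrative, not from the paper.
random.seed(42)
N_TASKS, N_EMPLOYEES = 12, 4
cost = [[random.randint(1, 9) for _ in range(N_EMPLOYEES)] for _ in range(N_TASKS)]

def fitness(assignment):
    # Workload of each employee under this assignment.
    load = [0] * N_EMPLOYEES
    for task, emp in enumerate(assignment):
        load[emp] += cost[task][emp]
    # Minimise the heaviest workload (fair balance, small makespan).
    return -max(load)

def mutate(assignment, rate=0.1):
    # Reassign each task to a random employee with a small probability.
    return [random.randrange(N_EMPLOYEES) if random.random() < rate else e
            for e in assignment]

def crossover(a, b):
    # One-point crossover of two assignments.
    cut = random.randrange(1, N_TASKS)
    return a[:cut] + b[cut:]

# Plain generational GA with elitism.
pop = [[random.randrange(N_EMPLOYEES) for _ in range(N_TASKS)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]

best = max(pop, key=fitness)
print(-fitness(best))  # heaviest workload in the best assignment found
```

A production version would add the task-precedence constraints and idle-time term to the fitness, which is where the MINLP character of the problem enters.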
Procedia PDF Downloads 168
14537 Satisfaction Level of Teachers on the Human Resource Management Practices
Authors: Mark Anthony A. Catiil
Abstract:
Teachers are the principal actors in the delivery of quality education to learners. Unfortunately, as time goes by, some of them lose motivation at work. Absenteeism, tardiness, undertime, and non-compliance with school policies are some of the results. There is, therefore, a need to review the different human resource management practices of the school that contribute to teachers’ work satisfaction and motivation. Hence, this study determined the level of satisfaction of teachers with the human resource management practices of Gingoog City Comprehensive National High School. This mixed-methodology research focused on 45 teachers chosen using a stratified random sampling technique. Reliability-tested questionnaires, interviews, and focus group discussions were used to gather the data. Results revealed that the majority of the respondents are female, Teacher I, with MA units, and have served for 11-20 years. Likewise, among the human resource management practices of the school, the respondents rated recruitment and selection with the lowest satisfaction (mean = 2.15; n = 45). This could mean that most of the recruitment and selection practices of the school are not well communicated, disseminated, and implemented. On the other hand, the retirement practices of the school were rated with the highest satisfaction (mean = 2.73; n = 45). This could mean that most of the retirement practices of the school are communicated, disseminated, implemented, and functional. It was recommended that the existing human resource management practices on recruitment and selection be reviewed to find out their deficiencies and possible improvements. Moreover, future researchers may also conduct a comparative study of private and public schools in Gingoog City on the same topic.Keywords: education, human resource management practices, satisfaction, teachers
Procedia PDF Downloads 128
14536 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection
Authors: Muhammad Ali
Abstract:
Cyber-attacks and anomaly detection on Internet of Things (IoT) infrastructure are an emerging concern in the domain of data-driven intrusion detection. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data type probing, malicious operation, DDoS, scan, spying, and wrong setup are attacks and anomalies that can cause an IoT system failure. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction. IoT devices expose a wide variety of new cyber security attack vectors in network traffic. For further IoT development, and mainly for smart IoT applications, intelligent processing and analysis of data are necessary, as is a secure approach. We train several machine learning models, which are compared for accurately predicting attacks and anomalies on IoT systems, using ANOVA-based feature selection to build prediction models with fewer features that evaluate network traffic and help protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, D.T., and R.F., with the most satisfactory test accuracy and fast detection. The evaluated ML metrics include precision, recall, F1 score, FPR, NPV, G.M., MCC, and AUC & ROC. The Random Forest algorithm achieved the best results with the least prediction time, with an accuracy of 99.98%.Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection
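The ANOVA-based feature selection step can be sketched minimally: rank each feature by its one-way F-statistic against the class label and keep the top-ranked ones for the classifier. The synthetic two-feature data below is purely illustrative, not the paper's IoT traffic set:

```python
import numpy as np

# Toy stand-in for labelled traffic: feature 0 separates the two
# classes, feature 1 is pure noise.
rng = np.random.default_rng(0)
X = np.column_stack([
    np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)]),  # informative
    rng.normal(0, 1, 200),                                           # noise
])
y = np.array([0] * 100 + [1] * 100)

def anova_f(X, y):
    """One-way ANOVA F-statistic of each feature against the class label."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    # Between-class sum of squares, per feature.
    ssb = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall) ** 2
              for c in classes)
    # Within-class sum of squares, per feature.
    ssw = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
              for c in classes)
    dfb, dfw = len(classes) - 1, len(y) - len(classes)
    return (ssb / dfb) / (ssw / dfw)

scores = anova_f(X, y)
ranking = np.argsort(scores)[::-1]  # keep the top-k features for the classifier
print(ranking[0])
```

In practice the same ranking is available as `f_classif`/`SelectKBest` in scikit-learn; the retained features would then feed the KNN/SVM/NB/DT/RF comparison the abstract describes.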
Procedia PDF Downloads 125
14535 Assessment of Adsorption Properties of Neem Leaves Wastes for the Removal of Congo Red and Methyl Orange
Authors: Muhammad B. Ibrahim, Muhammad S. Sulaiman, Sadiq Sani
Abstract:
Neem leaves were studied as plant-waste-derived adsorbents for the detoxification of Congo Red (CR) and Methyl Orange (MO) from aqueous solutions using the batch adsorption technique. The objectives involved determining the effects of the basic adsorption parameters, namely agitation time, adsorbent dosage, adsorbent particle size, adsorbate loading concentration, and initial pH, on the adsorption process, as well as characterizing the adsorbents by determining their physicochemical properties, the functional groups responsible for the adsorption process using Fourier transform infrared (FTIR) spectroscopy, and the surface morphology using scanning electron microscopy (SEM) coupled with energy-dispersive X-ray spectroscopy (EDS). The adsorption behaviours of the materials were tested against the Langmuir, Freundlich, and other isotherm models. Percent adsorption increased with increase in agitation time (5-240 minutes), adsorbent dosage (100-500 mg), and initial concentration (100-300 mg/L), and with decrease in particle size (≥75 μm to ≤300 μm) of the adsorbents. Both processes are dye pH-dependent, with percent adsorption increasing or decreasing in the acidic (2-6) or alkaline (8-12) range over the studied pH range (2-12). From the experimental data, the Langmuir separation factor (RL) suggests unfavourable adsorption for all processes, and the Freundlich constant (nF) indicates an unfavourable process for CR and MO adsorption; while the mean free energy of adsorptionKeywords: adsorption, congo red, methyl orange, neem leave
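The Langmuir separation factor RL mentioned above follows from a Langmuir fit; a minimal sketch using the linearised form Ce/qe = Ce/qmax + 1/(KL·qmax) on synthetic equilibrium data (the Ce, qe, and C0 values are illustrative, not the study's measurements):

```python
import numpy as np

# Illustrative equilibrium data (Ce in mg/L, qe in mg/g), generated
# from a Langmuir curve with qmax = 50 mg/g and KL = 0.1 L/mg.
qmax_true, KL_true = 50.0, 0.1
Ce = np.array([5.0, 20.0, 50.0, 100.0, 200.0])
qe = qmax_true * KL_true * Ce / (1 + KL_true * Ce)

# Linearised Langmuir: Ce/qe is linear in Ce.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax = 1 / slope            # mg/g
KL = slope / intercept      # L/mg, since intercept = 1/(KL*qmax)

# Separation factor RL = 1/(1 + KL*C0):
# 0 < RL < 1 is favourable, RL > 1 unfavourable, RL = 1 linear.
C0 = 100.0
RL = 1 / (1 + KL * C0)
print(round(qmax, 1), round(KL, 3), round(RL, 3))
```

The Freundlich constant nF would be obtained the same way from the log-log linearisation log qe = log KF + (1/nF) log Ce.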
Procedia PDF Downloads 365
14534 Efficacy and Safety of Sublingual Sufentanil for the Management of Acute Pain
Authors: Neil Singla, Derek Muse, Karen DiDonato, Pamela Palmer
Abstract:
Introduction: Pain is the most common reason people visit emergency rooms. Studies indicate, however, that Emergency Department (ED) physicians often do not provide adequate analgesia to their patients as a result of gender and age bias, opiophobia, and insufficient knowledge of and formal training in acute pain management. Novel classes of analgesics have recently been introduced, but many patients suffer from acute pain in settings where the availability of intravenous (IV) access may be limited, so there remains a clinical need for rapid-acting, potent analgesics that do not require an invasive route of delivery. A sublingual sufentanil tablet (SST), dispensed using a single-dose applicator, is in development for the treatment of moderate-to-severe acute pain in a medically supervised setting. Objective: The primary objective of this study was to demonstrate the repeat-dose efficacy, safety, and tolerability of sufentanil 20 mcg and 30 mcg sublingual tablets compared to placebo for the management of acute pain, as determined by the time-weighted sum of pain intensity differences from baseline over the 12-hour study period (SPID12). Key secondary efficacy variables included SPID over the first hour (SPID1), total pain relief over the 12-hour study period (TOTPAR12), time to perceived pain relief (PR), and time to meaningful PR. Safety variables consisted of adverse events (AE), vital signs, oxygen saturation, and early termination. Methods: In this Phase 2, double-blind, dose-finding study, an equal number of male and female patients were randomly assigned in a 2:2:1 ratio to SST 20 mcg, SST 30 mcg, or placebo, respectively, following bunionectomy. Study drug was dosed as needed, but not more frequently than hourly. Rescue medication was available as needed. The primary endpoint was the SPID12. Safety was assessed by continuous oxygen saturation monitoring and adverse event reporting.
Results: 101 patients (51 male/50 female) were randomized, 100 received study treatment (intent-to-treat [ITT] population), and 91 completed the study. Reasons for early discontinuation were lack of efficacy (6), adverse events (2), and a drug-dosing error (1). Mean age was 42.5 years. For the ITT population, SST 30 mcg was superior to placebo (p = 0.003) for the SPID12. SPID12 scores in the active groups were superior for both male (ANOVA overall p-value = 0.038) and female (ANOVA overall p-value = 0.005) patients. Statistically significant differences in favour of sublingual sufentanil were also observed between the SST 30 mcg and placebo groups for SPID1 (p < 0.001), TOTPAR12 (p = 0.002), time to perceived PR (p = 0.023), and time to meaningful PR (p = 0.010). Nausea, vomiting, and somnolence were more frequent in the sufentanil groups, but there were no significant differences between treatment arms in the proportion of patients who terminated prematurely due to AEs or inadequate analgesia. Conclusions: The sufentanil tablet dispensed sublingually using a single-dose applicator is in development for the treatment of patients with moderate-to-severe acute pain in medically supervised settings where immediate IV access is limited. When administered sublingually, sufentanil’s pharmacokinetic profile and non-invasive delivery make it a useful alternative to IM or IV dosing.Keywords: acute pain, pain management, sublingual, sufentanil
Procedia PDF Downloads 356
14533 Control of Biofilm Formation and Inorganic Particle Accumulation on Reverse Osmosis Membrane by Hypochlorite Washing
Authors: Masaki Ohno, Cervinia Manalo, Tetsuji Okuda, Satoshi Nakai, Wataru Nishijima
Abstract:
Reverse osmosis (RO) membranes have been widely used in desalination to purify water for drinking and other purposes. Although at present most RO membranes have no resistance to chlorine, chlorine-resistant membranes are being developed. Therefore, direct chlorine treatment or chlorine washing will be an option for preventing biofouling on chlorine-resistant membranes. Furthermore, if particle accumulation can be controlled by chlorine washing, expensive pretreatment for particle removal can be eliminated or simplified. The objective of this study was to determine the effective hypochlorite washing conditions required for controlling biofilm formation and inorganic particle accumulation on an RO membrane in a continuous flow channel with RO membrane and spacer. In this study, direct chlorine washing was done by soaking fouled RO membranes in hypochlorite solution, and fluorescence intensity was used to quantify biofilm on the membrane surface. After 48 h of soaking the membranes in high-fouling-potential waters, the fluorescence intensity decreased from 470 to 0 under the following washing conditions: 10 mg/L chlorine concentration, washing interval of 2 times/d, and 30 min washing time. The chlorine concentration required to control biofilm formation decreased as the chlorine concentration (0.5-10 mg/L), the washing interval (1-4 times/d), or the washing time (1-30 min) increased. For the sample solutions used in the study, a 10 mg/L chlorine concentration with a washing interval of 2 times/d and a 5 min washing time was required for biofilm control. The optimum chlorine washing conditions obtained from the soaking experiments proved to be applicable also to controlling biofilm formation in continuous flow experiments.
Moreover, chlorine washing employed to control biofilm in the presence of suspended particles resulted in lower amounts of organic (0.03 mg/cm2) and inorganic (0.14 mg/cm2) deposits on the membrane than for sample water without chlorine washing (0.14 mg/cm2 and 0.33 mg/cm2, respectively). The amount of biofilm formed was reduced by 79% by continuous washing with a free chlorine concentration of 10 mg/L, and the inorganic accumulation decreased by 58%, to levels similar to those of pure water with kaolin (0.17 mg/cm2) as feed water. These results confirmed the acceleration of particle accumulation due to biofilm formation, and that inhibiting biofilm growth can almost completely prevent further particle accumulation. In addition, effective hypochlorite washing conditions that can control both biofilm formation and particle accumulation could be achieved.Keywords: reverse osmosis, washing condition optimization, hypochlorous acid, biofouling control
Procedia PDF Downloads 352
14532 The Relationship between the Use of Social Networks with Executive Functions and Academic Performance in High School Students in Tehran
Authors: Esmail Sadipour
Abstract:
The use of social networks is increasing day by day in all societies. The purpose of this research was to determine the relationship between the use of social networks (Instagram, WhatsApp, and Telegram) and executive functions and academic performance in first-year female high school students. This research was applied in terms of purpose, quantitative in terms of data type, and correlational in terms of technique. The population consisted of all first-year female high school students in District 2 of Tehran. Using Green's formula, a sample size of 150 was determined and selected by the cluster random method: from all 17 high schools in District 2 of Tehran, 5 high schools were selected by simple random sampling, one class was then selected from each high school, and a total of 155 students were selected. A researcher-made questionnaire was used to measure the use of social networks, the Barkley test (2012) was used for executive functions, and last semester's GPA was used for academic performance. Pearson's correlation coefficient and multivariate regression were used to analyze the data. The results showed a negative relationship between the amount of social network use and self-control, self-motivation, and time self-management. In other words, the greater the use of social networks, the weaker students' executive functions of self-control, self-motivation, and time self-management. Also, as the use of social networks increased, students' academic performance decreased.Keywords: social networks, executive function, academic performance, working memory
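The Pearson correlation underlying the reported negative relationships can be sketched in a few lines; the numbers below are illustrative stand-ins, not the study's data:

```python
import numpy as np

# Hypothetical scores: daily hours on social networks vs. a
# self-control scale (illustrative values only).
hours = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 3.5, 4.0, 5.0])
self_control = np.array([9.0, 8.5, 8.0, 7.0, 6.0, 5.5, 5.0, 4.0])

# Pearson's r: covariance normalised by both standard deviations.
r = np.corrcoef(hours, self_control)[0, 1]
print(round(r, 3))  # strongly negative, mirroring the reported direction
```

The multivariate-regression step would extend this by regressing GPA on the usage measures jointly (e.g. with `numpy.linalg.lstsq` or statsmodels).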
Procedia PDF Downloads 96
14531 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy
Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay
Abstract:
Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury; it is associated with a three-fold risk of poor outcome and is more amenable to corrective interventions subsequent to early identification and management. Multiple definitions for stratifying patients' risk of early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospective data of adult trauma patients (n = 100) were then collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the prediction of acute coagulopathy of trauma score and the trauma-induced coagulopathy clinical score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition.
The overall prediction of acute coagulopathy of trauma score was 118.7±58.5, and the trauma-induced coagulopathy clinical score was 3 (0-8). Both scores were higher in coagulopathic than in non-coagulopathic patients (prediction of acute coagulopathy of trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; trauma-induced coagulopathy clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but not statistically significantly so. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than in non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high prediction of acute coagulopathy of trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the trauma-induced coagulopathy clinical score did not differ between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison to the prehospital parameter-based scoring systems. The prediction of acute coagulopathy of trauma score may be better suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests give highly specific results.Keywords: trauma, coagulopathy, prediction, model
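The ROC-based cut-off selection used to derive thresholds such as INR ≥ 1.19 can be sketched via Youden's J statistic; the INR values and labels below are illustrative, not the study's patient data:

```python
import numpy as np

# Illustrative INR values for non-coagulopathic (0) and
# coagulopathic (1) patients (synthetic numbers).
inr    = np.array([1.00, 1.05, 1.10, 1.15, 1.20, 1.25, 1.30, 1.40, 1.10, 1.22])
status = np.array([0,    0,    0,    0,    1,    1,    1,    1,    0,    1])

def best_cutoff(values, labels):
    """Cut-off maximising Youden's J = sensitivity + specificity - 1."""
    best_j, best_c = -1.0, None
    for c in np.unique(values):
        pred = values >= c  # test positive at or above the candidate cut-off
        sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
        spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
        j = sens + spec - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

cutoff, j = best_cutoff(inr, status)
print(cutoff, round(j, 2))
```

The same sweep over PT and aPTT values would yield the other two assay thresholds.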
Procedia PDF Downloads 176
14530 The Multipurpose Usage of Livestock Animal Dungs for Food Production in Gwagwalada Area Council of the Federal Capital Territory, Abuja Nigeria
Authors: Michael Adedotun Oke
Abstract:
This paper studies the various uses of different animal dungs, from animals such as rabbits, cows, fish, sheep, and poultry, in Gwagwalada Area Council of the Federal Capital Territory, Abuja, Nigeria. Observations, with pictorial documentation, were made during a field survey of different farms in Gwagwalada. The survey shows that rabbit dung is being used on some vegetable and crop farms, where it supplies nutrients, reduces the cost of production, and ensures profitability; it also increases vegetative growth, early maturity, and the development of the crop, and this is also applicable to crops such as maize and sweet potatoes. Poultry manure is being incorporated into fish ponds, and cow dung is being used as manure for certain crops, e.g., okro, maize, and pepper, providing the necessary nutrients. However, the quantity to apply (in numbers of bags) and the timing of application remain open questions, calling for further adaptive research and the introduction of new technology covering application methodologies such as broadcasting and ring application of the dungs, as well as the seasons of application. The paper therefore suggests training programs and the production of manuals to guide application and usage, together with effective dissemination of this simple technology, including teaching of the mode and timing of application and the quantities to be used.Keywords: animals, usage, livestock, dungs, faeces, gwagwalada
Procedia PDF Downloads 178
14529 Understanding Regional Circulations That Modulate Heavy Precipitations in the Kulfo Watershed
Authors: Tesfay Mekonnen Weldegerima
Abstract:
Analysis of precipitation time series is a fundamental undertaking in meteorology and hydrology. The extreme precipitation scenario of the Kulfo River watershed is studied using wavelet analysis and a Lagrangian atmospheric transport (trajectory) model. Daily rainfall data for the 1991-2020 study period were collected from the Ethiopian Meteorology Institute. Meteorological fields on a three-dimensional grid at 0.5° x 0.5° spatial resolution and daily temporal resolution were also obtained from the Global Data Assimilation System (GDAS). Wavelet analysis of the daily precipitation, processed with the lag-1 coefficient, reveals high power recurring once every 38 to 60 days with greater than 95% confidence against red noise. The analysis also identified inter-annual periodicity in the periods 2002-2005 and 2017-2019. Back-trajectory analysis for 3-day periods up to May 19, 2011 indicates an Indian Ocean source; trajectories crossed the eastern African escarpment to arrive at the Kulfo watershed. Atmospheric flows associated with the western Indian monsoon, redirected by the low-level Somali winds and the Arabian ridge, are responsible for the moisture supply. The time localization of the wavelet power spectrum yields valuable hydrological information, and the back-trajectory approach provides a useful characterization of air mass sources.Keywords: extreme precipitation events, power spectrum, back trajectory, kulfo watershed
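The kind of wavelet power analysis used to pick out the 38-60-day band can be sketched with a hand-rolled Morlet transform; the synthetic series, centre frequency, and period grid below are illustrative assumptions, not the Kulfo data or the study's exact method:

```python
import numpy as np

# Synthetic daily rainfall anomaly with a ~45-day oscillation
# buried in noise, standing in for the observed series.
rng = np.random.default_rng(1)
t = np.arange(730)  # two years of daily data
x = np.sin(2 * np.pi * t / 45) + 0.5 * rng.standard_normal(t.size)

def morlet_power(x, periods, w0=6.0):
    """Mean wavelet power of x at each requested period (in samples)."""
    power = []
    for p in periods:
        s = p * w0 / (2 * np.pi)  # scale matching a Morlet of centre freq w0
        u = np.arange(-4 * s, 4 * s + 1) / s
        wavelet = np.exp(1j * w0 * u) * np.exp(-u**2 / 2) / np.sqrt(s)
        # Correlate the series with the scaled wavelet.
        coef = np.convolve(x, np.conj(wavelet)[::-1], mode="same")
        power.append(np.mean(np.abs(coef) ** 2))
    return np.array(power)

periods = np.arange(10, 91, 5)
power = morlet_power(x - x.mean(), periods)
print(periods[np.argmax(power)])  # peaks near the 45-day cycle
```

A full analysis would keep the time-period power map (rather than its time mean) and test each cell against a lag-1 red-noise background, as the abstract describes.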
Procedia PDF Downloads 70
14528 Tunable Control of Therapeutics Release from the Nanochannel Delivery System (nDS)
Authors: Thomas Geninatti, Bruno Giacomo, Alessandro Grattoni
Abstract:
Nanofluidic devices have been investigated for over a decade as promising platforms for the controlled release of therapeutics. The nanochannel drug delivery system (nDS), a membrane fabricated with high-precision silicon techniques, is capable of zero-order release of drugs by exploiting diffusion transport at the nanoscale that originates from interactions between molecules and nanochannel surfaces; it has shown flexible, sustained release in vitro and in vivo over periods ranging from weeks to months. To improve this implantable bionanotechnology and create a system that possesses the key features for achieving suitable release of therapeutics, the next generation of the nDS has been created. Platinum electrodes are integrated by e-beam deposition onto both surfaces of the membrane, allowing low-voltage (<2 V), active temporal control of drug release through modulation of electrostatic potentials at the inlet and outlet of the membrane’s fluidic channels. Hence, tunable administration of drugs is ensured by the nanochannel drug delivery system. The membrane will be incorporated into an implantable PEEK capsule, which will include a drug reservoir, control hardware, and an RF system to allow suitable therapeutic regimens in real time. This new nanotechnology therefore offers tremendous potential for managing chronic diseases such as cancer, heart disease, circadian dysfunction, pain, and stress.Keywords: nanochannel membrane, drug delivery, tunable release, personalized administration, nanoscale transport, biomems
Procedia PDF Downloads 315
14527 Effectiveness of Intraoperative Heparinization in Neonatal and Pediatric Patients with Congenital Heart Diseases: Focus in Heparin Resistance
Authors: Karakhalis N. B.
Abstract:
This study aimed to determine the prevalence of heparin resistance among pediatric and neonatal cardiac surgical patients and identify associated risk factors. Materials and Methods: The study included 306 pediatric and neonatal patients undergoing on-pump cardiac surgery. Patients whose activated clotting time (ACT) targets were achieved after the first administration of heparin formed the 1st group (n=280); the 2nd group (n=26) included patients with heparin resistance. The initial assessment of the haemostasiological profile included determining PT, aPTT, FG, AT III activity, and INR. Intraoperative control of heparinization was carried out by determining ACT using a kaolin activator. A weight-based protocol at the rate of 300 U/kg, with target ACT values >480 sec, was used for intraoperative heparinization. Results: Heparin resistance was verified in 8.5% of the patients included in the study. Repeated heparin administration at a maximum dose of ≥600 U/kg was required in 80.77% of cases. Despite additional heparinization, 19.23% of patients received FFP infusion. There was reduced antithrombin activity in the heparin resistance group (p=0.01). Most patients with heparin resistance (57.7%) had been pretreated with low-molecular-weight heparins during the preoperative period. Conclusion: Determining the initial level of antithrombin activity can predict the risk of developing heparin resistance. Factor analysis verified heparin pretreatment, chronic hypoxia, and chronic heart failure as hidden risk factors for heparin resistance.Keywords: congenital heart disease, heparin, antithrombin, activated clotting time, heparin resistance
Procedia PDF Downloads 82
14526 Therapeutic Potential of GSTM2-2 C-Terminal Domain and Its Mutants, F157A and Y160A on the Treatment of Cardiac Arrhythmias: Effect on Ca2+ Transients in Neonatal Ventricular Cardiomyocytes
Authors: R. P. Hewawasam, A. F. Dulhunty
Abstract:
The ryanodine receptor (RyR) is an intracellular ion channel that releases Ca2+ from the sarcoplasmic reticulum and is essential for excitation-contraction coupling and contraction in striated muscle. The human muscle-specific glutathione transferase M2-2 (GSTM2-2) is a highly specific inhibitor of cardiac ryanodine receptor (RyR2) activity. Single-channel lipid bilayer studies and Ca2+ release assays performed using the C-terminal half of GSTM2-2 and its mutants F157A and Y160A confirmed the ability of the C-terminal domain of GSTM2-2 to specifically inhibit cardiac ryanodine receptor activity. The objective of the present study was to determine the effect of the C-terminal domain of GSTM2-2 (GSTM2-2C) and the mutants F157A and Y160A on the Ca2+ transients of neonatal ventricular cardiomyocytes. Primary cardiomyocytes were cultured from neonatal rats. They were treated with GSTM2-2C and the two mutants F157A and Y160A at 15 µM and incubated for 2 hours. The cells were then loaded with Fluo-4 AM, a fluorescent Ca2+ indicator, and the field-stimulated (1 Hz, 3 V, 2 ms) cells were excited using the 488 nm argon laser. The contractility of the cells was measured, and the Ca2+ transients in the stained cells were imaged using a Leica SP5 confocal microscope. The peak amplitude of the Ca2+ transient, rise time, and decay time from the peak were measured for each transient. In contrast to GSTM2-2C, which significantly reduced the % shortening (42.8%) in the field-stimulated cells, F157A and Y160A failed to reduce the % shortening. Analysis revealed that the average amplitude of the Ca2+ transient was significantly reduced (P<0.001) in cells treated with the wild-type GSTM2-2C compared to that of untreated cells. Cells treated with the mutants F157A and Y160A showed no significant change in the Ca2+ transient compared to the control.
A significant increase in the rise time (P < 0.001) and a significant reduction in the decay time (P < 0.001) were observed in cardiomyocytes treated with GSTM2-2C compared to the control, but not with F157A and Y160A. These results are consistent with the observation that GSTM2-2C significantly reduced Ca2+ release from the cardiac SR, whereas the mutants F157A and Y160A showed no effect compared to the control. GSTM2-2C has an isoform-specific effect on cardiac ryanodine receptor activity and inhibits RyR2 channel activity only during diastole. Selective inhibition of RyR2 by GSTM2-2C has significant clinical potential in the treatment of cardiac arrhythmias and heart failure. Since the GSTM2-2 C-terminal construct has no GST enzyme activity, introducing it into cardiomyocytes should not produce unwanted enzymatic side effects. The present study further confirms that GSTM2-2C is capable of decreasing Ca2+ release from the cardiac SR during diastole. These results raise the future possibility of using GSTM2-2C as a template for therapeutics that can depress RyR2 function when the channel is hyperactive in cardiac arrhythmias and heart failure.
Keywords: arrhythmia, cardiac muscle, cardiac ryanodine receptor, GSTM2-2
Procedia PDF Downloads 284
14525 Kinesio Taping in the Treatment of Patients with Intermittent Claudication
Authors: Izabela Zielinska
Abstract:
Kinesio Taping is classified as a physiotherapy method that supports rehabilitation and modulates some physiological processes. It is commonly used in sports medicine and orthopedics. This sensory method influences muscle function and pain sensation, stimulates the lymphatic system, and improves microcirculation. The aim of this study was to assess the effect of Kinesio Taping in patients undergoing treatment for peripheral artery disease (PAD). The study group comprised 60 patients (stage IIB on Fontaine's scale). The patients were divided into two groups of 30, and 12 weeks of treadmill training was administered to both. In the second group, Kinesio Taping was additionally applied to support the function of the gastrocnemius muscle. Distance and time to claudication pain, arterial blood flow in the lower limbs, and the ankle-brachial index were measured. Examination performed after Kinesio Taping therapy showed a statistically significant improvement in gait parameters and muscle strength in patients with intermittent claudication. The Kinesio Taping method has a clinically significant effect, increasing the pain-free distance and time to claudication pain in patients with peripheral artery disease. Kinesio Taping can be used to support non-invasive treatment in patients with intermittent claudication and can serve as an alternative therapy for patients with orthopedic or cardiac contraindications to treadmill training.
Keywords: intermittent claudication, kinesiotaping, peripheral artery disease, treadmill training
Procedia PDF Downloads 205