Search results for: and additional trim required
1085 Money Laundering Risk Assessment in the Banking Institutions: An Experimental Approach
Authors: Yusarina Mat-Isa, Zuraidah Mohd-Sanusi, Mohd-Nizal Haniff, Paul A. Barnes
Abstract:
In view that money laundering has become a prominent concern for banking institutions, it is an obligation for them to adopt a risk-based approach as an integral component of accepted anti-money laundering policies. In doing so, those involved with banking operations are the most critical group of personnel, as these are the people who deal with the day-to-day operations of the banking institutions and are obligated to form a judgement on the level of impending risk. This requirement extends to all relevant banking institution staff, such as tellers and customer account representatives, who must identify suspicious customers and escalate such cases to the relevant authorities. Banking institution staff, however, face enormous challenges in identifying and distinguishing money launderers from other legitimate customers seeking genuine banking transactions. Banking institution staff are mostly educated and trained with the business objective of serving customers in mind and are not trained to be “detectives with a detective’s power of observation”. Despite increasing awareness as well as training conducted for banking institution staff, their competency in assessing money laundering risk is still insufficient. Several gaps prompted this study, including the lack of behavioural perspectives in the assessment of money laundering risk in banking institutions. Utilizing an experimental approach, respondents were randomly assigned within a controlled setting to manipulated situations, and their judgement was solicited based on various observations related to the situations. The study suggests that it is imperative that informed judgement is exercised in arriving at the decision to proceed with the banking services required by customers. Judgement forms the basis of the opinion by which banking institution staff decide whether a customer poses a money laundering risk. Failure to exercise good judgement could result in losses and the absorption of unnecessary risk into the banking institution. Although banking institutions have access to a range of automated solutions for assessing money laundering risk, the human factor in assessing the risk is indispensable. Individual staff in the banking institutions are the first line of defence, responsible for screening the impending risk of any customer soliciting banking services. Ultimately, automated solutions are no substitute for the involvement of individual staff in money laundering risk assessment, as human judgement is inimitable.
Keywords: banking institutions, experimental approach, money laundering, risk assessment
Procedia PDF Downloads 267
1084 The Need for Automation in the Domestic Food Processing Sector and its Impact
Authors: Shantam Gupta
Abstract:
The objective of this study is to address the critical need for automation in the domestic food processing sector and to study its impact. Food is one of the most basic physiological needs essential for the survival of a living being. Some organisms have the capacity to prepare their own food (like most plants) and are hence designated primary food producers; those who depend on these primary food producers for food form the primary consumers' class (herbivores). Some of the organisms relying on the primary consumers are the secondary food consumers (carnivores). There is a third class of consumers, called tertiary or apex food consumers, that feed on both the primary and secondary food consumers. Humans form an essential part of the apex predators and are generally at the top of the food chain. Still, further examination of the food habits of the modern human, i.e. Homo sapiens, reveals that humans depend on other individuals to prepare their food. The old notion of eating raw food is long gone, and food processing has become deeply entrenched in the lives of modern humans. This has led to an increase in dependence on other individuals for 'processing' food before it can actually be consumed, and to a further shift of humans in the consumer classification of the food chain. The effects of these shifts are systematically investigated in this paper. The processing of food has a direct impact on the economy of the individual (consumer). Also, most individuals depend on other processing individuals for the preparation of their food. This dependency establishes a vital link in the food web which, when altered, can adversely affect the food web and have dire consequences on the health of the individual. This study investigates the challenges arising from this dependency and the impact of food processing on the economy of the individual. A comparison of industrial food processing and processing on domestic platforms (households and restaurants) is made to provide an idea of the present state of automation in the food processing sector. A lot of time and energy is also consumed while processing food at home for consumption, and the high frequency of meals (greater than two times a day) makes it even more laborious. Through this study, a pressing need for the development of an automatic cooking machine is proposed, with a mission to reduce the inter-dependency and human effort required for the preparation of food (by automating the food preparation process) and to make individuals more self-reliant. The impact of the development of this product is also discussed in depth. Assumption used: the individuals who process food also consume the food that they produce (they are also termed 'independent' or 'self-reliant' modern human beings).
Keywords: automation, food processing, impact on economy, processing individual
Procedia PDF Downloads 470
1083 Evaluation of Natural Waste Materials for Ammonia Removal in Biofilters
Authors: R. F. Vieira, D. Lopes, I. Baptista, S. A. Figueiredo, V. F. Domingues, R. Jorge, C. Delerue-Matos, O. M. Freitas
Abstract:
Odours are generated in municipal solid waste management plants as a result of the decomposition of organic matter, especially when anaerobic degradation occurs. Information was collected about the substances present, and their respective concentrations, in the atmosphere surrounding some management plants. The main components associated with these unpleasant odours were identified: ammonia, hydrogen sulfide and mercaptans. The first is the most common and the one that presents the highest concentrations, reaching values of 700 mg/m3. Biofiltration, which involves simultaneous biodegradation, absorption and adsorption processes, is a sustainable technology for the treatment of these odour emissions when a natural packing material is used. The packing material should ideally be cheap and durable, and allow maximum microbiological activity and adsorption/absorption. The presence of nutrients and water is required for the biodegradation processes. Adsorption and absorption are enhanced by a high specific surface area, high porosity and low density. The main purpose of this work is the exploitation of locally available natural waste materials as packing media: heather (Erica lusitanica), chestnut bur (from Castanea sativa), peach pits (from Prunus persica) and eucalyptus bark (from Eucalyptus globulus). Preliminary batch tests of ammonia removal were performed in order to select the most interesting materials for biofiltration, which were then characterized. The following physical and chemical parameters were evaluated: density, moisture, pH, buffer capacity and water retention capacity. Equilibrium isotherms were also determined and fitted to the Langmuir and Freundlich models; both models can fit the experimental results. Based both on its performance as an adsorbent and on its physical and chemical characteristics, eucalyptus bark was considered the best material. It presents a maximum adsorption capacity of 0.78±0.45 mol/kg for ammonia. The results of its characterization are: 121 kg/m3 density, 9.8% moisture, pH of 5.7, buffer capacity of 0.370 mmol H+/kg of dry matter and water retention capacity of 1.4 g H2O/g of dry matter. The application of locally available natural materials, with little processing, in biofiltration is an economical and sustainable alternative that should be explored.
Keywords: ammonia removal, biofiltration, natural materials, odour control
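To illustrate the isotherm-fitting step described above, here is a minimal sketch (not the authors' code; the data points are invented placeholders) of adjusting the Langmuir and Freundlich models to equilibrium data with SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: gas-phase concentration C (mol/m3)
# and adsorbed amount q (mol/kg); placeholder values, not measured data.
C = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
q = np.array([0.10, 0.18, 0.30, 0.45, 0.60, 0.72])

def langmuir(C, q_max, K):
    # q = q_max * K * C / (1 + K * C)
    return q_max * K * C / (1.0 + K * C)

def freundlich(C, K_f, n):
    # q = K_f * C^(1/n)
    return K_f * C ** (1.0 / n)

popt_l, _ = curve_fit(langmuir, C, q, p0=[1.0, 1.0])
popt_f, _ = curve_fit(freundlich, C, q, p0=[0.5, 2.0])

print(f"Langmuir:   q_max = {popt_l[0]:.3f} mol/kg, K = {popt_l[1]:.3f}")
print(f"Freundlich: K_f = {popt_f[0]:.3f}, n = {popt_f[1]:.3f}")
```

The fitted q_max plays the role of the maximum adsorption capacity reported above for eucalyptus bark.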
Procedia PDF Downloads 369
1082 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow
Authors: Masood Otarod, Ronald M. Supkowski
Abstract:
This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be utilized to conduct kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of CO adsorption in the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of axial velocity. This is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. But ideal plug-flow is impossible to achieve, and flow regimes approximating plug-flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem is derived from the observation that a cross section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentrations of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function. The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and suffer the same variability but in the reverse order of the concentrations of mobile-phase compounds. Factorability is a property of packed beds which transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa, and Ωr, which are respectively denominated the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation as compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number. This is expected to be the case based on the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations
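In schematic form, and with generic notation rather than the authors' exact equations, the reduction described above can be pictured as follows: factoring the concentration field and averaging radially collapses the axisymmetric dispersion equation into a constant-coefficient ODE in the mean cup-mixing concentration (first-order kinetics is assumed here purely for illustration):

```latex
% General axisymmetric dispersion model (steady state, generic notation):
\[
u(r)\,\frac{\partial C}{\partial z}
  = D_a\,\frac{\partial^{2} C}{\partial z^{2}}
  + \frac{D_r}{r}\,\frac{\partial}{\partial r}\!\left(r\,\frac{\partial C}{\partial r}\right)
  - R(C)
\]
% Factorization C(z,r) = \bar{C}(z)\,\phi(z,r), followed by radial averaging,
% yields a second-order linear ODE with constant coefficients in \bar{C},
% in which cofactors \Omega adjust the transport terms (illustrative form):
\[
\Omega_a D_a\,\frac{d^{2}\bar{C}}{dz^{2}}
  \;-\; \Omega_c\,\bar{u}\,\frac{d\bar{C}}{dz}
  \;-\; k\,\bar{C} \;=\; 0
\]
```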
Procedia PDF Downloads 269
1081 Estimation of the Exergy-Aggregated Value Generated by a Manufacturing Process Using the Theory of the Exergetic Cost
Authors: German Osma, Gabriel Ordonez
Abstract:
The production of metal-rubber spares for vehicles is a sequential process that consists of the transformation of raw material through cutting activities and chemical and thermal treatments, which demand electricity and fossil fuels. Energy efficiency analysis in these cases is mostly focused on the study of each machine or production step, but it is not common to study the quality of the production process from an aggregated-value viewpoint, which can be used as a quality measurement for determining the impact on the environment. In this paper, the theory of exergetic cost is used to determine the exergy aggregated in three metal-rubber spares, from an exergy analysis and a thermoeconomic analysis. The manufacturing of these spares is based on a batch production technique; therefore, the use of this theory is proposed for discontinuous flows based on single models of workstations. Subsequently, the complete exergy model of each product is built using flowcharts. These models are a representation of the exergy flows between components in the machines according to electrical, mechanical and/or thermal expressions; they determine the exergy demanded to produce the effective transformation of raw materials (the aggregated exergy value), and the exergy losses caused by equipment and irreversibilities. The energy resources of the manufacturing process are electricity and natural gas. The workstations considered are lathes, punching presses, cutters, a zinc machine, chemical treatment tanks, hydraulic vulcanizing presses and a rubber mixer. The thermoeconomic analysis was done by workstation and by spare; the first describes the operation of the components of each machine and locates the exergy losses, while the second estimates the exergy-aggregated value of the finished product and the wasted feedstock. Results indicate that the exergy efficiency of a mechanical workstation is between 10% and 60%, while this value in the thermal workstations is less than 5%; also, each effective exergy-aggregated value is one-thirtieth of the total exergy required for the operation of the manufacturing process, which amounts to approximately 2 MJ. These losses are caused mainly by technical limitations of the machines, oversizing of the metal feedstock, which demands more mechanical transformation work, and the low thermal insulation of the chemical treatment tanks and hydraulic vulcanizing presses. This case illustrates the usefulness of the theory of exergetic cost for analyzing aggregated value in manufacturing processes.
Keywords: exergy-aggregated value, exergy efficiency, thermoeconomics, exergy modeling
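As a purely illustrative sketch of the bookkeeping behind such results (invented numbers, not the paper's data), the exergy efficiency of a workstation is the ratio of the effective aggregated exergy to the exergy supplied:

```python
# Illustrative exergy bookkeeping for batch workstations (invented values).
workstations = {
    # name: (exergy input [kJ], effective aggregated exergy [kJ])
    "lathe":          (500.0, 150.0),
    "vulcanizing":    (800.0,  30.0),
    "treatment_tank": (600.0,  20.0),
}

for name, (ex_in, ex_aggregated) in workstations.items():
    ex_losses = ex_in - ex_aggregated           # equipment losses + irreversibilities
    efficiency = ex_aggregated / ex_in * 100.0  # exergy efficiency in %
    print(f"{name:>14}: efficiency = {efficiency:5.1f} %, losses = {ex_losses:6.1f} kJ")
```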
Procedia PDF Downloads 170
1080 The Use of Videos: Effects on Children's Language and Literacy Skills
Authors: Rahimah Saimin
Abstract:
Previous research has shown that young children can learn from educational television programmes, videos or other technological media. However, the blending of any of these with traditional print-based text appears to be omitted. Repeated viewing is an important factor in children's ability to comprehend the content or plot. The present study, which combines videos with traditional print-based text and requires repeated viewing, is original and distinctive. The first study was a pilot study to explore whether the intervention is implementable in ordinary classrooms. The second study explored whether curricular embedding is important, i.e. whether the video is effective only when embedded in the curriculum. The third study explored the effect of "dosage", i.e. whether a longer/more intense intervention has a proportionately greater effect on outcomes. Both measured outcomes (comprehension, word sounds, and early word recognition) and unmeasured outcomes (engagement with reading traditional print-based texts and/or multimodal texts) were obtained from this study. Observation indicated the degree of engagement in reading. The theoretical framework was multimodality theory combined with Piaget's and Vygotsky's learning theories. An experimental design was used with 4-5-year-old children in nursery schools and primary schools. Six links to video clips exploring non-fiction science content were provided to teachers. The first session is whole-class and subsequent sessions small-group. The teacher then engaged the children in dialogue using supplementary materials. About half of each class was selected randomly for pre-post assessments. Two assessments were used: the British Picture Vocabulary Scale (BPVS III) and the York Assessment of Reading for Comprehension (YARC): Early Reading. Several programme fidelity measures were deployed: observations, teacher self-reports, attendance logs and post-delivery interviews. Data collection is in progress and results will be available shortly. If this multiphase study shows effectiveness in one or other application, then teachers will have other tools which they can use to enhance vocabulary, letter knowledge and word reading. This would be a valuable addition to their repertoire.
Keywords: language skills, literacy skills, multimodality, video
Procedia PDF Downloads 337
1079 Experimental Study of Nucleate Pool Boiling Heat Transfer Characteristics on Laser-Processed Copper Surfaces of Different Patterns
Authors: Luvindran Sugumaran, Mohd Nashrul Mohd Zubir, Kazi Md Salim Newaz, Tuan Zaharinie Tuan Zahari, Suazlan Mt Aznam, Aiman Mohd Halil
Abstract:
With the fast growth of integrated circuits and the trend towards making electronic devices smaller, the heat dissipation load of electronic devices has continued to go over the limit. A high heat flux would not only harm the operation and lifetime of the equipment but would also impede the performance upgrades brought about by each iteration of technological updates, which would have a direct negative impact on the economic and production cost benefits of rising industries. Hence, in high-tech industries like radar, information and communication, electromagnetic power, and aerospace, the development and implementation of effective heat dissipation technologies are urgently required. Pool boiling is favored over other cooling methods because of its capacity to dissipate a high heat flux at a low wall superheat without the usage of mechanical components. Enhancing pool boiling performance by increasing the heat transfer coefficient via surface modification techniques has received a lot of attention. Several surface modification methods are feasible today, but the stability and durability of the surface modification are the greatest priority. Thus, laser machining is an interesting choice for surface modification due to its low production cost, high scalability, and repeatability. In this study, laser-processed copper surfaces of different patterns are fabricated to investigate the nucleate pool boiling heat transfer performance of distilled water. The investigation showed a significant enhancement in the pool boiling heat transfer performance of the laser-processed surface compared to the reference surface, due to the notable increase in nucleation frequency and nucleation site density. It was discovered that the heat transfer coefficients increased when both the surface area ratio and the peak-to-valley height ratio of the microstructure were raised. It is believed that the development of microstructures on the surface as a result of laser processing is the primary factor in the enhancement of heat transfer performance.
Keywords: heat transfer coefficient, laser processing, microstructured surface, pool boiling
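For reference, the heat transfer coefficient discussed above is conventionally defined from the wall heat flux and the wall superheat (a standard definition, not specific to this paper):

```latex
\[
h \;=\; \frac{q''}{\Delta T_{sat}} \;=\; \frac{q''}{T_{wall} - T_{sat}}
\]
% q''            : heat flux dissipated at the boiling surface (W/m2)
% \Delta T_{sat} : wall superheat, i.e. wall temperature minus saturation temperature (K)
```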
Procedia PDF Downloads 88
1078 Applying the Underwriting Technique to Analyze and Mitigate the Credit Risks in Construction Project Management
Authors: Hai Chien Pham, Thi Phuong Anh Vo, Chansik Park
Abstract:
Risk management in construction projects is important to ensure the feasibility of the projects, in which financial risks are of most concern, since construction projects always run on a credit basis. Credit risks, therefore, require unique and technical tools to be well managed. The underwriting technique for credit risks, in its most basic sense, refers to the process of evaluating the risks and the potential exposure to losses. Risk analysis and underwriting are applied as a must in banks and financial institutions, which are supporters of construction projects when required. Recently, construction organizations, especially contractors, have recognized a significant increase in credit risks, which has caused negative impacts on project performance and the profits of construction firms. Despite the successful application of underwriting in banks and financial institutions for many years, few contractors apply this technique to analyze and mitigate the credit risks of their potential owners before signing contracts with them for delivering their services. Thus, contractors have taken on credit risks during project implementation whose payment might never materialize, due to bankruptcy and/or protracted default by their owners. With this regard, this study proposes a model using the underwriting technique for contractors to analyze and assess the credit risks of their owners before making final decisions on potential construction contracts. The contractor's underwriters are able to analyze and evaluate subjects such as the owner, country, sector, payment terms, financial figures and related concerns of the credit limit request in detail, based on reliable information sources, and then input them into the proposed model to obtain an Overall Assessment Score (OAS). The OAS serves as a benchmark for decision makers to grant the proper limits for the project. The proposed underwriting model was validated on 30 subjects in the Asia-Pacific region over 5 years to obtain their OAS, and the output OAS was then compared with their actual performance in order to evaluate the potential of the underwriting model for analyzing and assessing credit risks. The results revealed that underwriting is a powerful method to assist contractors in making precise decisions. The contribution of this research is to allow contractors, first, to develop their own credit risk management models for proactively preventing the credit risks of construction projects, and to continuously improve and enhance the performance of this function during project implementation.
Keywords: underwriting technique, credit risk, risk management, construction project
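The abstract does not disclose the model's criteria weights or scales, but a weighted Overall Assessment Score of the kind described could be sketched as follows (all names and numbers are hypothetical):

```python
# Hypothetical OAS computation: criterion scores on a 0-100 scale,
# weights are invented placeholders (the paper does not publish them).
criteria = {
    # name: (score, weight)
    "owner_financials": (72.0, 0.30),
    "country_risk":     (60.0, 0.15),
    "sector_outlook":   (55.0, 0.15),
    "payment_terms":    (80.0, 0.25),
    "credit_history":   (65.0, 0.15),
}

assert abs(sum(w for _, w in criteria.values()) - 1.0) < 1e-9

oas = sum(score * weight for score, weight in criteria.values())
print(f"Overall Assessment Score: {oas:.1f}")  # benchmark for the credit limit decision
```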
Procedia PDF Downloads 208
1077 Antioxidant Potency of Ethanolic Extracts from Selected Aromatic Plants by in vitro Spectrophotometric Analysis
Authors: Tatjana Kadifkova Panovska, Svetlana Kulevanova, Blagica Jovanova
Abstract:
Biological systems possess the ability to neutralize the excess of reactive oxygen species (ROS) and to protect cells from destructive alterations. However, many pathological conditions (cardiovascular diseases, autoimmune disorders, cancer) are associated with inflammatory processes that generate an excessive amount of ROS, shifting the balance between endogenous antioxidant systems and free oxygen radicals in favor of the latter and leading to oxidative stress. Therefore, an additional source of natural compounds with antioxidant properties that will reduce the amount of ROS in cells is much needed; despite their broad utilization, many plant species remain largely unexplored. The purpose of the present study is therefore to investigate the antioxidant activity of twenty-five selected medicinal and aromatic plant species. The antioxidant activity of the ethanol extracts was evaluated with in vitro assays: 2,2'-diphenyl-1-pycryl-hydrazyl (DPPH), ferric reducing antioxidant power (FRAP), and non-site-specific (NSSOH) and site-specific (SSOH) hydroxyl radical 2-deoxy-D-ribose degradation assays. The Folin-Ciocalteu method and the AlCl3 method were performed to determine the total phenolic content (TPC) and total flavonoid content (TFC). All examined plant extracts manifested antioxidant activity to a different extent. Cinnamomum verum J.Presl bark and Ocimum basilicum L. herba demonstrated strong radical scavenging activity and reducing power with the DPPH and FRAP assays, respectively. Additionally, significant hydroxyl scavenging potential and metal chelating properties were observed using the NSSOH and SSOH assays. Furthermore, significant variations were determined in the total phenolic content (TPC) and total flavonoid content (TFC), with Cinnamomum verum and Ocimum basilicum showing the highest amounts of total polyphenols. The considerably strong radical scavenging activity, hydroxyl scavenging potential and reducing power of the species mentioned above suggest the presence of highly bioactive phytochemical compounds, predominantly polyphenols. Since flavonoids are the most abundant group of polyphenols possessing a large number of available reactive OH groups in their structure, they are considered the main contributors to the radical scavenging properties of the examined plant extracts. This observation is supported by the positive correlation between the radical scavenging activity and the total polyphenolic and flavonoid contents obtained in the current research. These observations nominate Cinnamomum verum bark and Ocimum basilicum herba as potential sources of bioactive compounds that could be utilized as antioxidative additives in the food and pharmaceutical industries. Moreover, the present study will provide researchers with basic data for future research in exploiting the hidden potential of these important plants, which have not been explored so far.
Keywords: ethanol extracts, radical scavenging activity, reducing power, total polyphenols
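For context, DPPH radical scavenging results of this kind are conventionally expressed as a percentage inhibition computed from absorbances (a standard formula, not quoted from the paper):

```latex
\[
\%\ \mathrm{inhibition} \;=\; \frac{A_{control} - A_{sample}}{A_{control}} \times 100
\]
% A_{control} : absorbance of the DPPH solution without extract (typically at 517 nm)
% A_{sample}  : absorbance after reaction with the plant extract
```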
Procedia PDF Downloads 199
1076 Artificial Intelligence for Traffic Signal Control and Data Collection
Authors: Reggie Chandra
Abstract:
Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason behind that is insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect 24/7/365 accurate traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection, how timing plans are created, and the best workflow for all of this. Apart from that, this paper will showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain. It consists of millions of densely connected processing nodes. It is a form of machine learning where the neural net learns to recognize vehicles through training, which is called Deep Learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but also, in cases such as classifying objects into fine-grained categories, outperform humans. Safety is of primary importance to traffic professionals, but they don't have the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data. We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, a snapshot of limited handpicked data, and multiple systems requiring additional adaptation work. The methodologies used and proposed in the research contain a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired through a variety of daily real-world road conditions, and compared with the performance of commonly used methods that require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying it to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal
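As a minimal illustration of the class of model described, and not the system presented in the paper, a small convolutional network for classifying road-user image crops might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class RoadUserCNN(nn.Module):
    """Toy CNN that classifies 64x64 RGB crops into road-user classes."""
    def __init__(self, num_classes: int = 4):  # e.g. car, truck, pedestrian, bike
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                # -> (N, 64, 8, 8)
        return self.classifier(x.flatten(1))

model = RoadUserCNN()
logits = model(torch.randn(1, 3, 64, 64))   # dummy input crop
print(logits.shape)                         # torch.Size([1, 4])
```

A production system of the kind described would train such a network on labeled detections and feed the resulting counts into timing-plan generation.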
Procedia PDF Downloads 169
1075 Kinetic Energy Recovery System Using Spring
Authors: Mayuresh Thombre, Prajyot Borkar, Mangirish Bhobe
Abstract:
New advancements in technology and the never-satisfied demands of civilization are putting huge pressure on natural fuel resources, and these resources are under constant threat to their sustainability. To get the best out of an automobile, the optimum balance between performance and fuel economy is important. In the present state of the art, only one of these two aspects is kept in mind during the design and development process, which puts the other at a loss, as an increase in fuel economy leads to a decrease in performance and vice versa. In-depth observation of vehicle dynamics shows that a large amount of energy is lost during braking, and likewise a large amount of fuel is consumed to reclaim the initial state; this leads to lower fuel efficiency for the same performance. The current use of Kinetic Energy Recovery Systems is limited to sports vehicles because of their high cost. They are also temporary in nature, as power can be extracted only during a small time duration, and the use of superior parts leads to high cost, which results in a concentration on performance only, neglecting fuel economy. In this paper, a Kinetic Energy Recovery System for storing braking energy and then using it while accelerating is discussed. The major storage element in this system is a flat spiral spring that stores energy by compression and torsion. The use of a spring ensures the permanent storage of energy until used by the driver, unlike present mechanical regeneration systems in which the stored energy decreases with time and is eventually lost. A combination of internal gears and spur gears is used in order to make the energy release uniform, which leads to safe usage. The system can be used to improve fuel efficiency by assisting in overcoming the vehicle's inertia after braking, or to provide instant acceleration whenever required by the driver. The performance characteristics of the system, including response time, mechanical efficiency and overall increase in efficiency, are demonstrated. This technology makes KERS (Kinetic Energy Recovery System) more flexible and economical, allowing specific applications while at the same time increasing the time frame and ease of usage.
Keywords: electric control unit, energy, mechanical KERS, planetary gear system, power, smart braking, spiral spring
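For orientation, the energy stored in a linear spiral (torsion) spring follows the standard textbook relation (not taken from the paper):

```latex
\[
\tau \;=\; k_{t}\,\theta, \qquad E \;=\; \tfrac{1}{2}\,k_{t}\,\theta^{2}
\]
% k_t    : torsional stiffness of the flat spiral spring (N·m/rad)
% \theta : winding angle accumulated during braking (rad)
% \tau   : restoring torque available during release for acceleration (N·m)
```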
Procedia PDF Downloads 201
1074 Specification Requirements for a Combined Dehumidifier/Cooling Panel: A Global Scale Analysis
Authors: Damien Gondre, Hatem Ben Maad, Abdelkrim Trabelsi, Frédéric Kuznik, Joseph Virgone
Abstract:
The use of a radiant cooling solution would enable lower cooling needs, which is of great interest when the demand is initially high (hot climate). But radiant systems are not naturally compatible with humid climates, since a low-temperature surface leads to condensation risks as soon as the surface temperature is close to or lower than the dew point temperature. A radiant cooling system combined with a dehumidification system would enable humidity to be removed from the space, thereby lowering the dew point temperature. The humidity removal needs to be especially effective near the cooled surface. This requirement could be fulfilled by a system using a single desiccant fluid for the removal of both excessive heat and moisture. This work aims at providing an estimation of the specification requirements of such a system in terms of the cooling power and dehumidification rate required to fulfill comfort requirements and to prevent any condensation risk on the cool panel surface. The present paper develops a preliminary study on the specification requirements, performance and behavior of a combined dehumidifier/cooling ceiling panel for different operating conditions. This study has been carried out using the TRNSYS software, which allows nodal calculations of thermal systems. It consists of the dynamic modeling of the heat and vapor balances of a 5 m x 3 m x 2.7 m office space. In a first design estimation, this room is equipped with an ideal heating, cooling, humidification and dehumidification system, so that the room temperature is always maintained between 21°C and 25°C with a relative humidity between 40% and 60%. The room is also equipped with a ventilation system that includes a heat recovery heat exchanger and another heat exchanger connected to a heat sink. Main results show that the system should be designed to meet a cooling power of 42 W/m2 and a desiccant rate of 45 g H2O/h. Subsequently, a parametric study of comfort requirements and system performance was carried out on a more realistic system (one that includes a chilled ceiling) under different operating conditions. It enables an estimation of an acceptable range of operating conditions. This preliminary study is intended to provide useful information for the system design.
Keywords: dehumidification, nodal calculation, radiant cooling panel, system sizing
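A minimal sketch of the kind of nodal balance such a simulation solves, as a single-node model with invented gains and an assumed 15 m2 panel (this is not the TRNSYS deck used in the study):

```python
# Single-node air balance for a 5 m x 3 m x 2.7 m office (illustrative values).
rho_air, cp_air = 1.2, 1005.0          # kg/m3, J/(kg.K)
volume = 5 * 3 * 2.7                   # m3
dt = 60.0                              # time step, s

T, w = 27.0, 0.013                     # air temperature (C), humidity ratio (kg/kg)
gains = 500.0                          # internal + solar heat gains, W (assumed)
moisture_gain = 1.0e-5                 # occupant moisture source, kg/s (assumed)
panel_cooling = 42.0 * 15.0            # 42 W/m2 over an assumed 15 m2 ceiling panel
dehumid_rate = 45.0 / 3600.0 / 1000.0  # 45 g H2O/h -> kg/s

m_air = rho_air * volume
for _ in range(60):                    # one hour of simulation
    T += dt * (gains - panel_cooling) / (m_air * cp_air)   # sensible balance
    w += dt * (moisture_gain - dehumid_rate) / m_air       # moisture balance
print(f"T = {T:.2f} C, w = {w * 1000:.2f} g/kg")
```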
Procedia PDF Downloads 175
1073 The Mental Workload of Intensive Care Unit Nurses in Performing Human-Machine Tasks: A Cross-Sectional Survey
Authors: Yan Yan, Erhong Sun, Lin Peng, Xuchun Ye
Abstract:
Aims: The present study aimed to explore Intensive Care Unit (ICU) nurses' mental workload (MWL), and the factors associated with it, in performing human-machine tasks. Background: A wide range of emerging technologies have penetrated the field of health care, and ICU nurses are facing a dramatic increase in nursing human-machine tasks. However, there is still a paucity of literature reporting on the general MWL of ICU nurses performing human-machine tasks and the associated influencing factors. Methods: A cross-sectional survey was employed. The data were collected from January to February 2021 from 9 tertiary hospitals in 6 provinces (Shanghai, Gansu, Guangdong, Liaoning, Shandong, and Hubei). Two-stage sampling was used to recruit eligible ICU nurses (n=427). The data were collected with an electronic questionnaire comprising sociodemographic characteristics and measures of MWL, self-efficacy, system usability, and task difficulty. Univariate analysis, two-way analysis of variance (ANOVA), and a linear mixed model were used for data analysis. Results: Overall, the mental workload of ICU nurses in performing human-machine tasks was medium (score 52.04 on a 0-100 scale). Among the typical nursing human-machine tasks selected, the MWL of ICU nurses in completing first aid and life support tasks ('Using a defibrillator to defibrillate' and 'Use of ventilator') was significantly higher than for the others (p < .001). ICU nurses' MWL in performing human-machine tasks was also associated with age (p = .001), professional title (p = .002), years of working in the ICU (p < .001), willingness to study emerging technology actively (p = .006), task difficulty (p < .001), and system usability (p < .001). Conclusion: The MWL of ICU nurses is at a moderate level in the context of a rapid increase in nursing human-machine tasks. However, there are significant differences in MWL when performing different types of human-machine tasks, and MWL can be influenced by a combination of factors. Nursing managers need to develop intervention strategies in multiple ways. Implications for practice: Multidimensional approaches are required to perform human-machine tasks better, including enhancing nurses' willingness to learn emerging technologies actively, developing training strategies that vary with tasks, and identifying obstacles in the process of human-machine system interaction.
Keywords: mental workload, nurse, ICU, human-machine, tasks, cross-sectional study, linear mixed model, China
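A linear mixed model of the kind described can be sketched with statsmodels as follows (hypothetical file and column names; the study's actual model specification is not given in the abstract):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per nurse per human-machine task.
df = pd.read_csv("icu_mwl.csv")  # columns: nurse_id, mwl, task_type,
                                 # age_group, task_difficulty, system_usability

# A random intercept per nurse accounts for repeated measures across tasks.
model = smf.mixedlm(
    "mwl ~ task_type + age_group + task_difficulty + system_usability",
    data=df,
    groups=df["nurse_id"],
)
result = model.fit()
print(result.summary())
```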
Procedia PDF Downloads 69
1072 Ship Roll Reduction Using Water-Flow Induced Coriolis Effect
Authors: Mario P. Walker, Masaaki Okuma
Abstract:
Ships are subjected to motions which can disrupt on-board operations and damage equipment. Roll motion, in particular, is of great interest due to low damping conditions which may lead to capsizing. Therefore, finding ways to reduce this motion is important in ship design. Several techniques have been investigated to reduce rolling. These include the commonly used anti-roll tanks, fin stabilizers and bilge keels. However, these systems are not without their challenges. For example, water-flow in anti-roll tanks creates complications, and fin stabilizers and bilge keels require an extremely large size to produce any significant damping, creating operational challenges. Additionally, among the measures presented above, only anti-roll tanks are effective at zero forward speed. This paper proposes and investigates a method to reduce rolling by inducing a Coriolis effect using water-flow in the radial direction. Motion in the radial direction of a rolling structure will induce a Coriolis force and, depending on the direction of flow, will either amplify or attenuate the roll motion. The system is modelled with two degrees of freedom: rotational motion for parametric rolling and radial motion of the water-flow. Equations of motion are derived and investigated. Numerical examples are analyzed in detail. To demonstrate applicability, parameters from a Ro-Ro vessel are used, as extensive research has been conducted on these over the years. The vessel is investigated under free and forced roll conditions. Several models are created using various masses, heights, and velocities of water-flow at a given time. The proposed system was found to produce substantial roll reduction, which increases with an increase in any of the varied parameters stated above, with velocity having the most significant effect. The proposed system provides a simple approach to reducing ship rolling. Water-flow control is very simple, as the water flows in only one direction with constant velocity; only the time at which the system is turned on or off needs to be controlled. Furthermore, the proposed system is effective in both forward and zero forward motion of the ship, and introduces no hydrodynamic drag. This is a starting point for designing an effective and practical system. For this to be a viable approach, further investigations are needed to address the challenges that present themselves.
Keywords: Coriolis effect, damping, rolling, water-flow
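In generic notation (not the authors' exact two-degree-of-freedom model), the mechanism can be sketched as follows: water of mass m moving radially in a frame rolling at a given rate contributes a Coriolis moment to the roll equation:

```latex
% Roll equation with an added Coriolis moment from radial water-flow
% (schematic, illustrative notation only):
\[
\left(I + m r^{2}\right)\ddot{\phi} \;+\; c\,\dot{\phi} \;+\; k\,\phi
  \;=\; M_{wave}(t) \;-\; 2\,m\,r\,\dot{r}\,\dot{\phi}
\]
% I    : roll moment of inertia of the ship about the roll axis
% m, r : mass and instantaneous radial position of the moving water
% The Coriolis term 2 m r \dot{r} \dot{\phi} opposes the roll rate when the
% flow is outward (\dot{r} > 0), acting as damping; reversed flow would
% instead feed energy into the roll.
```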
Procedia PDF Downloads 450
1071 Modernization of Translation Studies Curriculum at Higher Education Level in Armenia
Authors: A. Vahanyan
Abstract:
The paper touches upon the problem of revision and modernization of the current curriculum on translation studies at Armenian Higher Education Institutions (HEIs). In the contemporary world, where the quality and speed of services provided are most valued, certain higher education centers in Armenia do not demonstrate enough flexibility in terms of the revision and amendment of the courses taught. This issue is present in various curricula at the university level, and in the Translation Studies curriculum in particular. Technological innovations that are of great help to translators have long been smoothly implemented in the global translation industry. According to the European Master's in Translation (EMT) framework, translation service provision comprises linguistic, intercultural, information mining, thematic, and technological competences. Therefore, to form the competences mentioned above, the curriculum should be seriously restructured to meet modern education and job market requirements, and relevant courses should be proposed. New courses, in particular, should focus on the formation of technological competences. These suggestions have been made following the author's research of the problem across various HEIs in Armenia. The updated curricula should include courses aimed at familiarization with various computer-assisted translation (CAT) tools (MemoQ, Trados, OmegaT, Wordfast, etc.) in the translation process, and at the creation of glossaries and termbases compatible with different platforms, which will ensure consistency in the translation of similar texts and speed up the translation process itself. Another aspect that may be strengthened via curriculum modification is the introduction of interdisciplinary and Project-Based Learning courses, which will enable info-mining and thematic competences, which are of great importance as well. Of course, the amendment of the existing curriculum with the mentioned courses will require corresponding faculty development via training, workshops, and seminars. Finally, the provision of extensive internships with translation agencies is strongly recommended, as it will ensure the synthesis of the theoretical background and practical skills highly required in this specific area. Summing up, the restructuring and modernization of the existing curricula on Translation Studies should focus on three major aspects, i.e., the introduction of new courses that meet global quality standards of education, professional development for faculty, and the integration of extensive internships supervised by experts in the field.
Keywords: competencies, curriculum, modernization, technical literacy, translation studies
Procedia PDF Downloads 131
1070 A Concept Study to Assist Non-Profit Organizations to Better Target Developing Countries
Authors: Malek Makki
Abstract:
The main purpose of this research study is to assist non-profit organizations (NPOs) to better segment a group of least developed countries and to optimally target the neediest areas, so that the aid provided makes positive and lasting differences. We applied international marketing and strategy approaches to segment a sub-group of candidates among a group of 151 countries identified by the UN-G77 list, and furthermore, we point out the priority areas. We use reliable and well-known criteria on the basis of economic, geographic, demographic and behavioral factors. These criteria can be objectively estimated and updated, so that a follow-up can be performed to measure the outcomes of any program. We selected 12 socio-economic criteria that complement each other: GDP per capita, GDP growth, industry value added, exports per capita, fragile state index, corruption perception index, environmental protection index, ease of doing business index, global competitiveness index, Internet use, public spending on education, and employment rate. A weight was attributed to each variable to highlight the relative importance of each criterion within the country. Care was taken to collect the most recent available data from trusted, well-known international organizations (IMF, WB, WEF, and WTO). Construct equivalence was ensured in order to compare the same variables across countries. The combination of all these weighted, estimated criteria provides us with a global index that represents the level of development of each country. An absolute index that combines wars and risks was introduced to exclude or include a country on the basis of conflicts and state collapse. The final step, applied to the included countries, consists of a benchmarking method to select the segment of countries and the percentile of each criterion. The results of this study allowed us to exclude 16 countries for risks and security. We also excluded four countries because they lack reliable and complete data. The other countries were classified by percentile through their global index, and we identified the neediest countries and the areas where aid is most required, to help any NPO prioritize its areas of implementation. This new concept is based on defined, actionable, accessible and accurate variables by which NPOs can implement their programs, and it can be extended to for-profit companies to perform their corporate social responsibility acts.
Keywords: developing countries, international marketing, non-profit organization, segmentation
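A minimal sketch of the weighted-index construction described above (the weights and scores below are invented placeholders; the study uses the 12 listed criteria):

```python
import numpy as np

# Rows: countries; columns: normalized criterion scores in [0, 1]
# (e.g. GDP per capita, corruption index, Internet use, ...). Invented data.
scores = np.array([
    [0.20, 0.35, 0.10, 0.40],
    [0.60, 0.55, 0.70, 0.50],
    [0.15, 0.25, 0.05, 0.30],
])
weights = np.array([0.4, 0.2, 0.2, 0.2])  # placeholder weights, sum to 1

global_index = scores @ weights            # development level per country
percentiles = [np.mean(global_index <= v) * 100 for v in global_index]

for i, (gi, pct) in enumerate(zip(global_index, percentiles)):
    print(f"country {i}: index = {gi:.3f} (percentile {pct:.0f})")
```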
Procedia PDF Downloads 302
1069 HIV-1 Nef Mediates Host Invasion by Differential Expression of Alpha-Enolase
Authors: Reshu Saxena, R. K. Tripathi
Abstract:
HIV-1 transmission and spread involves significant host-virus interaction. Potential targets for the prevention of HIV-1 lie at mucosal barriers. Thus, a better understanding of how HIV-1 infects target cells at such sites and leads to their invasion is required, with a prime focus on the host determinants regulating HIV-1 spread. HIV-1 Nef is important for viral infectivity and pathogenicity. It promotes HIV-1 replication, facilitating immune evasion by interacting with various host factors and altering cellular pathways via multiple protein-protein interactions. In this study, nef was sequenced from HIV-1 patients and showed specific mutations revealing sequence variability. To explore the difference in Nef functionality based on sequence variability, we have studied the effects of HIV-1 Nef in the human SupT1 T cell line and the THP-1 monocyte-macrophage cell line through a proteomics approach. 2D gel electrophoresis in control and Nef-transfected SupT1 cells demonstrated several differentially expressed proteins, with significant modulation of alpha-enolase. In further studies, the effects of Nef on alpha-enolase regulation were found to be cell lineage-specific, being stimulatory in macrophages/monocytes, inhibitory in T cells and without effect in HEK-293 cells. Cell migration and invasion studies were employed to determine the biological function affected by Nef-mediated regulation of alpha-enolase. Cell invasion was enhanced in THP-1 cells but inhibited in SupT1 cells by wild-type nef. In addition, the modulation of enolase and cell invasion remained unaffected by a unique nef variant. These results indicated that the regulation of alpha-enolase expression and of the invasive property of host cells by Nef is sequence-specific, suggesting the involvement of a particular motif of Nef. To precisely determine this site, we designed a heptapeptide including the suggested alpha-enolase-regulating sequence of nef, and a nef mutant with a deletion of this site. Macrophages/monocytes, being the major cells affected by HIV-1 at mucosal barriers, were particularly investigated with the nef mutant and the peptide. Both the nef mutant and the heptapeptide led to inhibition of the enhanced enolase expression and increased invasiveness in THP-1 cells. Together, these findings suggest a possible mechanism of host invasion by HIV-1 through Nef-mediated regulation of alpha-enolase, and identify a potential therapeutic target for HIV-1 entry at mucosal barriers.
Keywords: HIV-1 Nef, nef variants, host-virus interaction, tissue invasion
Procedia PDF Downloads 411
1068 Relation Between Traffic Mix and Traffic Accidents in a Mixed Industrial Urban Area
Authors: Michelle Eliane Hernández-García, Angélica Lozano
Abstract:
Traffic accident studies usually contemplate the relation between factors such as the type of vehicle, its operation, and the road infrastructure. Traffic accidents can be explained by different factors, which have greater or lower relevance. Two zones are studied: a mixed industrial zone and its extended zone. The first zone has mainly residential (57%) and industrial (23%) land uses. Trucks travel mainly on the roads where industries are located. Four sensors give information about traffic and speed on the main roads. The extended zone (which includes the first zone) has mainly residential (47%) and mixed residential (43%) land use, and just 3% industrial use. Its traffic mix is composed mainly of non-trucks. Thirty-nine traffic and speed sensors are located on the main roads. The traffic mix in a mixed land use zone could be related to traffic accidents. To understand this relation, it is required to identify the elements of the traffic mix which are linked to traffic accidents. Models that attempt to explain which factors are related to traffic accidents have faced multiple methodological problems in obtaining robust databases. Poisson regression models are used to explain the accidents. The objective of the Poisson analysis is to estimate a coefficient vector providing an estimate of the natural logarithm of the mean number of accidents per period; this estimate is achieved by standard maximum likelihood procedures. For the estimation of the relation between traffic accidents and the traffic mix, the database comprises eight variables, with 17,520 observations and six explanatory vectors. In the model, the dependent variable is the occurrence or non-occurrence of accidents, and the vectors that seek to explain it correspond to the vehicle classes C1, C2, C3, C4, C5, and C6, standing respectively for cars; microbuses and vans; buses; unitary trucks (2 to 6 axles); articulated trucks (3 to 6 axles); and bi-articulated trucks (5 to 9 axles); in addition, there is a vector for the average speed of the traffic mix. A Poisson model is applied, using a logarithmic link function and a Poisson family. For the first zone, the Poisson model shows a positive relation between traffic accidents and C6, average speed, C3, C2, and C1 (in decreasing order). The analysis of the coefficients shows a strong relation with bi-articulated trucks and buses (C6 and C3), indicating an important participation of freight trucks. For the expanded zone, the Poisson model shows a positive relation between traffic accidents and average speed, bi-articulated trucks (C6), and microbuses and vans (C2). The coefficients obtained in both Poisson models show a stronger relation between freight trucks and traffic accidents in the industrial zone than in the expanded zone.
Keywords: freight transport, industrial zone, traffic accidents, traffic mix, trucks
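A Poisson regression of this form can be sketched with statsmodels as follows (hypothetical file and column names mirroring the vehicle classes described; not the study's actual dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical observations: accident counts plus traffic-mix volumes per period.
df = pd.read_csv("traffic_mix.csv")  # columns: accidents, C1..C6, avg_speed

X = sm.add_constant(df[["C1", "C2", "C3", "C4", "C5", "C6", "avg_speed"]])
model = sm.GLM(df["accidents"], X, family=sm.families.Poisson())  # log link by default
result = model.fit()  # standard maximum likelihood estimation

print(result.summary())        # positive coefficients flag accident-prone classes
print(np.exp(result.params))   # rate ratios per unit increase in each regressor
```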
Procedia PDF Downloads 130
1067 Challenging Conventions: Rethinking Literature Review Beyond Citations
Authors: Hassan Younis
Abstract:
Purpose: The objective of this study is to review influential papers in the sustainability and supply chain domain, leveraging insights from this review to develop a structured framework for academics and researchers. This framework aims to assist scholars in identifying the most impactful publications for their scholarly pursuits. Subsequently, the study applies the developed framework to selected scholarly articles within the sustainability and supply chain domain to evaluate its efficacy, practicality, and reliability. Design/Methodology/Approach: Utilizing the "Publish or Perish" tool, a search was conducted to locate papers incorporating "sustainability" and "supply chain" in their titles. After rigorous filtering steps, a panel of university professors identified five crucial criteria for evaluating research robustness: average yearly citation count (25%), scholarly contribution (25%), alignment of findings with objectives (15%), methodological rigor (20%), and journal impact factor (15%). These five evaluation criteria are abbreviated as the "ACMAJ" framework. Each paper then received a tiered score (1-3) for each criterion, normalized within its category; the scores were summed using weighted averages to calculate a Final Normalized Score (FNS). This systematic approach allows for objective comparison and ranking of research based on its impact, novelty, rigor, and publication venue. Findings: The study's findings highlight the lack of structured frameworks for assessing influential sustainability research in supply chain management, which often results in a dependence on citation counts. In response, a complete model that incorporates five essential criteria has been suggested. Through a methodical trial on selected academic articles in the field of sustainability and supply chain studies, the model demonstrated its effectiveness as a tool for identifying and selecting influential research papers that warrant additional attention. This work aims to fill a significant deficiency in existing techniques by providing a more comprehensive approach to identifying and ranking influential papers in the field. Practical Implications: The developed framework helps scholars identify the most influential sustainability and supply chain publications. Its validation serves the academic community by offering a credible tool and helping researchers, students, and practitioners find and choose influential papers. This approach aids literature reviews and study recommendations in the field. Analysis of major trends and topics deepens our grasp of this critical study area's changing terrain. Originality/Value: The framework stands as a unique contribution to academia, offering scholars an important new tool to identify and validate influential publications. Its distinctive capacity to efficiently guide scholars, learners, and professionals in selecting noteworthy publications, coupled with the examination of key patterns and themes, adds depth to our understanding of the evolving landscape in this critical field of study.
Keywords: supply chain management, sustainability, framework, model
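Using the weights stated above, the FNS computation can be sketched as follows (the tier scores and the normalization onto [0, 1] are illustrative assumptions; the paper's exact normalization is not given in the abstract):

```python
# ACMAJ weights as stated in the abstract.
weights = {
    "avg_yearly_citations":   0.25,
    "scholarly_contribution": 0.25,
    "findings_alignment":     0.15,
    "methodological_rigor":   0.20,
    "journal_impact_factor":  0.15,
}

def final_normalized_score(tier_scores: dict) -> float:
    """Weighted average of tiered scores (1-3), mapped onto [0, 1] (assumed)."""
    weighted = sum(weights[k] * tier_scores[k] for k in weights)
    return (weighted - 1.0) / 2.0  # map the 1..3 range onto 0..1

# Example paper with invented tier scores.
paper = {"avg_yearly_citations": 3, "scholarly_contribution": 2,
         "findings_alignment": 3, "methodological_rigor": 2,
         "journal_impact_factor": 1}
print(f"FNS = {final_normalized_score(paper):.3f}")  # -> 0.625
```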
Procedia PDF Downloads 52
1066 Implementing Effective Mathematical-Discussion Programme for Mathematical Competences in Primary School Classroom in South Korea
Authors: Saeyoung Lee
Abstract:
As enthusiasm for education in Korea is extremely high, it is well known that children in Korea get good scores in Mathematics. However, behind this good reputation, children in Korea easily lose self-confidence, tend to complain, and rarely participate in class because of excessive competition, which leads to a lack of competences. In this regard, the main goal of this paper is, by applying a programme based on peer-communication in Mathematics education for children aged 10, to improve self-managemental competence, so that children gain self-confidence; communicative competence, so that they can deal with complaints; and communitive competence, so that they participate in class. Fourteen children aged 10 at one primary school in Gangnam, Seoul, Korea participated in the research from March 2018 to October 2018. They followed the programme based on peer-communication during this period. Every Mathematics class proceeded in the same way. First, a problem was given to the children. Second, the children were asked to find as many ways of solving the problem as they could by themselves. Third, all the children's solution methods were posted on the board, and groups of three children distinguished the valid methods from the invalid ones. Lastly, all the children held a discussion to find the most efficient of the valid methods. A pre-test was carried out with a Likert-scale questionnaire before applying the programme. The result of the pre-test was 3.89 for self-managemental competence, 3.91 for communicative competence and 4.19 for communitive competence. A post-test was carried out with the same questionnaire after applying the programme. The result of the post-test was 3.93 for self-managemental competence, 4.23 for communicative competence and 4.20 for communitive competence. This means that by applying the programme based on peer-communication in Mathematics education, 10-year-old children in Korea could improve their self-managemental, communicative and communitive competences. It worked especially well for communicative competence, which increased by 0.32 points. Considering this research, Korean Mathematics education based on competition, which leads to a lack of competences, should be changed to a cooperative structure to make students more competent rather than merely achieving good scores. In conclusion, innovative teaching methods focused on improving competences, such as the programme based on peer-communication applied in this research, are strongly required to be studied and widely used.
Keywords: competences, mathematics education, peer-communication, primary education
Procedia PDF Downloads 133
1065 Usability Testing on Information Design through Single-Lens Wearable Device
Authors: Jae-Hyun Choi, Sung-Soo Bae, Sangyoung Yoon, Hong-Ku Yun, Jiyoung Kwahk
Abstract:
This study was conducted to investigate the effect of ocular dominance on recognition performance using a single-lens smart display designed for cycling. A total of 36 bicycle riders who have been cycling consistently were recruited and participated in the experiment. The participants were asked to perform tasks while riding a bicycle on a stationary stand for safety reasons. Independent variables of interest included ocular dominance, bike usage, age group, and information layout. Recognition time (i.e., the time required to identify specific information, measured with an eye-tracker), error rate (i.e., a false answer or failure to identify the information within 5 seconds), and user preference scores were measured, and statistical tests were conducted to identify significant results. Recognition time and error ratio showed significant differences by the ocular dominance factor, while the preference score did not. Recognition time was faster when the single-lens see-through display was worn on the dominant eye (average 1.12 sec) than on the non-dominant eye (average 1.38 sec). The error ratio of the information recognition task was significantly lower when the see-through display was worn on the dominant eye (average 4.86%) than on the non-dominant eye (average 14.04%). The interaction effect of ocular dominance and age group was significant with respect to recognition time and error ratio. The recognition time of users in their 40s was significantly longer than that of the other age groups when the display was placed on the non-dominant eye, while no difference was observed on the dominant eye. Error ratio showed the same pattern. Although no difference was observed for the main effects of ocular dominance and bike usage, the interaction effect between the two variables was significant with respect to preference score. The preference score of daily bike users was higher when the display was placed on the dominant eye, whereas participants who use bikes for leisure purposes showed the opposite preference pattern. It was found more effective and efficient to wear a see-through display on the dominant eye than on the non-dominant eye, although user preference was not affected by ocular dominance. It is recommended to wear a see-through display on the dominant eye, since it is safer, helping the user recognize the presented information faster and more accurately, even if the user may not notice the difference. Keywords: eye tracking, information recognition, ocular dominance, smart headwear, wearable device
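The abstract reports only group means, so the sketch below illustrates the kind of independent-samples comparison presumably used, on synthetic per-rider recognition times whose means (~1.12 s and ~1.38 s) come from the text; the spreads, sample draw, and Welch correction are assumptions, not the study's actual analysis.

```python
# Sketch of a dominant- vs non-dominant-eye recognition-time comparison.
# Only the two group means come from the abstract; per-trial values are
# synthetic and the test choice is an assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dominant = rng.normal(1.12, 0.20, size=36)       # seconds, hypothetical spread
non_dominant = rng.normal(1.38, 0.25, size=36)

t, p = stats.ttest_ind(dominant, non_dominant, equal_var=False)  # Welch t-test
print(f"mean dominant = {dominant.mean():.2f} s, "
      f"mean non-dominant = {non_dominant.mean():.2f} s, "
      f"t = {t:.2f}, p = {p:.4f}")
```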
Procedia PDF Downloads 272
1064 Seismic Retrofit of Tall Building Structure with Viscous, Visco-Elastic, Visco-Plastic Damper
Authors: Nicolas Bae, Theodore L. Karavasilis
Abstract:
Increasingly, a large number of new and existing tall buildings are required to improve their resilient performance against strong winds and earthquakes to minimize direct, as well as indirect, damages to society. Interruption of the essential functions housed in tall buildings in metropolitan regions can be severely hazardous in socio-economic terms, which further increases the requirement for advanced seismic performance. To meet these progressive requirements, seismic reinforcement of some old, conventional buildings has become enormously costly. The methods of increasing buildings' resilience against wind or earthquake loads have also become more advanced. Up to now, vibration control devices, such as passive damper systems, are still regarded as an effective and easy-to-install option for improving the seismic resilience of buildings at affordable cost. The main purposes of this paper are to examine 1) the optimization of the shape of the visco-plastic brace damper (VPBD) system, a hybrid damper system, so that it maximizes its energy dissipation capacity in tall buildings against wind and earthquake loads, and 2) the verification of the seismic performance of the visco-plastic brace damper system in tall buildings, up to forty-storey steel frame buildings, by comparing the results of Non-Linear Response History Analysis (NLRHA) with and without the damper system. The most significant contribution of this research is to introduce an optimized hybrid damper system that is adequate for high-rise buildings. The efficiency of this visco-plastic brace damper system and the advantages of its use in tall buildings can be verified, since tall buildings tend to be governed by wind loads in their normal state and by earthquake loads after yielding of the steel plates. The modeling of the prototype tall building will be conducted using the OpenSees software. Three types of models were used to verify the performance of the damper (a bare MRF, an MRF with visco-elastic dampers, and an MRF with visco-plastic dampers); a set of 22 seismic records was used, and the scaling procedure followed the FEMA code. It is shown that the MRF with viscous and visco-elastic dampers is markedly more effective at reducing inelastic deformation, such as roof displacement, maximum story drift, and roof velocity, than the bare MRF. Keywords: tall steel building, seismic retrofit, viscous, viscoelastic damper, performance based design, resilience based design
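As background to the damper types compared above, the sketch below evaluates two textbook force models: a fractional-power nonlinear viscous damper and a Kelvin-Voigt visco-elastic element. These are generic idealizations, not the authors' VPBD formulation, and all parameter values are hypothetical.

```python
# Textbook damper force models (generic idealizations, hypothetical values;
# not the authors' VPBD model).
import numpy as np

def viscous_force(c, v, alpha=0.5):
    # Nonlinear viscous damper: F = c * |v|^alpha * sign(v)
    return c * np.abs(v) ** alpha * np.sign(v)

def kelvin_voigt_force(k, c, u, v):
    # Visco-elastic element (spring and dashpot in parallel): F = k*u + c*v
    return k * u + c * v

# Harmonic displacement history u(t) = u0 * sin(2*pi*f*t)
t = np.linspace(0.0, 4.0, 401)                   # s
u = 0.02 * np.sin(2 * np.pi * 0.5 * t)           # m
v = np.gradient(u, t)                            # m/s

f_visc = viscous_force(c=300e3, v=v)             # N, hypothetical coefficient
f_ve = kelvin_voigt_force(k=5e6, c=200e3, u=u, v=v)
print(f"peak viscous force ~ {np.max(np.abs(f_visc)) / 1e3:.0f} kN, "
      f"peak visco-elastic force ~ {np.max(np.abs(f_ve)) / 1e3:.0f} kN")
```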
Procedia PDF Downloads 193
1063 Comparison of Early Post-operative Outcomes of Cardiac Surgery Patients Who Have Had Blood Transfusion Based on Fixed Cut-off Point versus Change in Percentage of Basal Hematocrit Levels
Authors: Khosro Barkhordari, Fateme Sadr, Mina Pashang
Abstract:
Background: Blood transfusion is one of the major issues in cardiac surgery patients. Transfusing patients based on fixed cut-off points of hemoglobin is the current protocol in most institutions, and a hemoglobin level of 7-10 g/dL has been suggested as the transfusion threshold in cardiac surgery patients. We aimed to evaluate whether blood transfusion based on the percentage change in hematocrit yields different outcomes. Methods: In this retrospective cohort study, we investigated the early postoperative outcomes of cardiac surgery patients who received blood transfusions at Tehran Heart Center Hospital, IRAN. We reviewed and analyzed the basic characteristics and clinical data of 700 patients who met our inclusion and exclusion criteria. Two groups of transfused patients were compared: those with a 30-50 percent decrease in basal hematocrit versus those with a 10-29 percent decrease. Quantitative variables were compared using the Student t-test or the Mann‐Whitney U test, as appropriate, while categorical variables were compared using the χ2 or the Fisher exact test, as required. The early postoperative outcomes compared between the two groups were 30-day mortality, length of ICU stay, length of hospital stay, intubation time, infection rate, acute kidney injury, and respiratory complications. The main goal was to determine whether transfusing blood based on the change from basal hematocrit gives better early post-operative outcomes than a fixed cut-off point; this has not been studied enough and may require randomized controlled trials. Results: This is an ongoing study, and the results will be completed in two weeks, after analysis of the data. Conclusion: Early analysis has shown no difference in early post-operative outcomes between the two groups, but the final analysis will be completed in two weeks. Keywords: post-operative, cardiac surgery, outcomes, blood transfusion
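The grouping rule in the Methods reduces to a simple percentage-drop computation; the sketch below illustrates it with hypothetical hematocrit values, using only the 10-29% and 30-50% cut-offs stated in the text.

```python
# Sketch of the study's grouping rule: classify patients by the percentage
# decrease from basal hematocrit. Hematocrit values are hypothetical; the
# cut-offs come from the Methods.
def hct_drop_percent(basal: float, current: float) -> float:
    return 100.0 * (basal - current) / basal

def transfusion_group(basal: float, current: float) -> str:
    drop = hct_drop_percent(basal, current)
    if 30.0 <= drop <= 50.0:
        return "large-drop group (30-50%)"
    if 10.0 <= drop < 30.0:
        return "moderate-drop group (10-29%)"
    return "outside study groups"

print(transfusion_group(basal=42.0, current=27.0))  # ~35.7% drop
print(transfusion_group(basal=42.0, current=36.0))  # ~14.3% drop
```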
Procedia PDF Downloads 86
1062 An Evaluation of Medical Waste in Health Facilities through Data Envelopment Analysis (DEA) Method: Turkey-Amasya Public Hospitals Union Model
Authors: Murat Iskender Aktaş, Sadi Ergin, Rasime Acar Aktaş
Abstract:
In the light of fast-paced changes and developments in the health sector, the Ministry of Health started a new structuring with decree law number 663 within the scope of the Project of Transformation in Health. Accordingly, hospitals should ensure patient satisfaction through more efficient and more effective use of resources and sustainable finance by placing patients at the centre, and should operate to raise efficiency to its maximum level while doing so. Within this study, in order to find out how efficient the hospitals were in terms of medical waste management between the years 2011-2014, the data from six hospitals of the Amasya Public Hospitals Union were evaluated separately through the Data Envelopment Analysis (DEA) method. First of all, the input variables were determined: the number of patients admitted to polyclinics, the number of inpatients in clinics, the number of patients who were operated on, and the number of patients who applied to the laboratory. The output variable was the cost of medical waste in Turkish liras. For each hospital, the total medical waste level before and after the public hospitals union, together with the average medical waste per patient admitted to polyclinics, per inpatient in clinics, per laboratory patient, and per operated patient, was compared within each group. In addition, average medical waste levels and costs were compared with those for Turkey in general and Europe in general. The paired samples t-test was used to find out whether the changes (increase or decrease) after the public hospitals union were statistically significant. The health facilities that were unsuccessful in terms of medical waste management before and after the public hospitals union, and the factors that caused this failure, were determined. Based on the results, for each health facility that was inefficient in terms of medical waste management, the level of improvement required for each input was determined. The results of the study showed that there was an improvement in medical waste management practices after the health facilities became members of the public hospitals union; their medical waste levels were lower than the averages for Turkey and Europe, while their average disposal costs were the highest. Keywords: medical waste management, cost of medical waste, public hospitals, data envelopment analysis
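For readers unfamiliar with DEA, the sketch below solves the standard input-oriented CCR efficiency model as a linear programme. The hospital data are hypothetical; note also that the study treats medical waste cost as the output, which in practice needs care since lower cost is desirable, so the sketch uses a generic desirable output for simplicity.

```python
# Minimal sketch of an input-oriented CCR DEA model (envelopment form)
# solved with scipy's linear programming. All data are hypothetical.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (hospitals); columns = inputs / outputs
X = np.array([[120.0, 40.0],    # e.g., polyclinic patients, inpatients
              [150.0, 55.0],
              [ 90.0, 35.0]])
Y = np.array([[300.0],          # a generic desirable output per hospital
              [420.0],
              [250.0]])

def ccr_efficiency(X, Y, j0):
    n, m = X.shape              # n DMUs, m inputs
    s = Y.shape[1]              # s outputs
    c = np.zeros(1 + n)
    c[0] = 1.0                  # decision vector z = [theta, lam_1..lam_n]
    A_ub, b_ub = [], []
    for i in range(m):          # sum_j lam_j * x_ij <= theta * x_i,j0
        A_ub.append(np.r_[-X[j0, i], X[:, i]]); b_ub.append(0.0)
    for r in range(s):          # sum_j lam_j * y_rj >= y_r,j0
        A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[j0, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]             # theta = 1.0 means efficient

for j in range(X.shape[0]):
    print(f"hospital {j + 1}: CCR efficiency = {ccr_efficiency(X, Y, j):.3f}")
```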
Procedia PDF Downloads 415
1061 A Clinico-Bacteriological Study and Their Risk Factors for Diabetic Foot Ulcer with Multidrug-Resistant Microorganisms in Eastern India
Authors: Pampita Chakraborty, Sukumar Mukherjee
Abstract:
This study was done to determine the bacteriological profile and antibiotic resistance of the isolates and to find out the potential risk factors for infection with multidrug-resistant organisms. Diabetic foot ulcer is a major medical, social, and economic problem and a leading cause of morbidity and mortality, especially in developing countries like India. Twenty-five percent of all diabetic patients develop a foot ulcer at some point in their lives; such ulcers are highly susceptible to infection, which spreads rapidly, leading to overwhelming tissue destruction and subsequent amputation. Infection with multidrug-resistant organisms (MDRO) may increase the cost of management and may cause additional morbidity and mortality. Proper management of these infections requires appropriate antibiotic selection based on culture and antimicrobial susceptibility testing. Early diagnosis of microbial infections aims to institute appropriate antibacterial therapy and avoid further complications. A total of 200 Type 2 Diabetes Mellitus patients with infection were admitted to the GD Hospital and Diabetes Institute, Kolkata. Sixty of them, who developed ulcers during the year 2013, were included in this study. A detailed clinical history and physical examination were carried out for every subject. Specimens for microbiological studies were obtained from the ulcer region. Gram-negative bacilli were tested for extended-spectrum beta-lactamase (ESBL) production by the double-disc diffusion method. Staphylococcal isolates were tested for susceptibility to oxacillin by the screen agar method and disc diffusion. Potential risk factors for MDRO-positive samples were explored. Gram-negative aerobes were most frequently isolated, followed by gram-positive aerobes. Males were predominant in the study, and the majority of the patients were in the age group of 41-60 years. The presence of neuropathy was observed in 80% of cases, followed by peripheral vascular disease (73%). Proteus spp. (22) was the most common pathogen isolated, followed by E. coli (17). Staphylococcus aureus was predominant among the gram-positive isolates. S. aureus showed a high rate of resistance to the antibiotics tested (63.6%). Other gram-positive isolates were found to be highly resistant to erythromycin, tetracycline, and ciprofloxacin, 40% each. All isolates were found to be sensitive to vancomycin and linezolid. ESBL production was noted in Proteus spp. and E. coli. Approximately 70% of the patients were positive for MDRO. MDRO-infected patients had poor glycemic control (HbA1c 11 ± 2). Infection with MDROs is common in diabetic foot ulcers and is associated with risk factors such as inadequate glycemic control, the presence of neuropathy, osteomyelitis, ulcer size, and an increased requirement for surgical treatment. There is a need for continuous surveillance of resistant bacteria to provide the basis for empirical therapy and reduce the risk of complications. Keywords: diabetic foot ulcer, bacterial infection, multidrug-resistant organism, extended spectrum beta-lactamase
Procedia PDF Downloads 337
1060 A Comparison of Antibiotic Resistant Enterobacteriaceae: Diabetic versus Non-Diabetic Infections
Authors: Zainab Dashti, Leila Vali
Abstract:
Background: The Middle East, in particular Kuwait, has one of the highest rates of diabetes in the world. Generally, infections resistant to antibiotics among the diabetic population have been shown to be on the rise. This is the first study in Kuwait to compare the antibiotic resistance profiles and genotypic differences between resistant isolates of Enterobacteriaceae obtained from diabetic and non-diabetic patients. Material/Methods: In total, 65 isolates were collected from diabetic patients, consisting of 34 E. coli, 15 K. pneumoniae and 16 other Enterobacteriaceae species (including Salmonella spp., Serratia spp. and Proteus spp.). In our control group, a total of 49 isolates consisting of 37 E. coli, 7 K. pneumoniae and 5 other species (including Salmonella spp., Serratia spp. and Proteus spp.) were included. Isolates were identified at the species level, and antibiotic resistance profiles, including colistin, were determined initially using the Vitek system, followed by double-dilution MIC and E-test assays. Multidrug resistance (MDR) was defined as resistance to a minimum of three antibiotics from three different classes. PCR was performed to detect ESBL genes (blaCTX-M, blaTEM & blaSHV), fluoroquinolone resistance genes (qnrA, qnrB, qnrS & aac(6')-lb-cr) and carbapenem resistance genes (blaOXA, blaVIM, blaGIM, blaKPC, blaIMP, & blaNDM) in both groups. Pulsed-field gel electrophoresis (PFGE) was performed to compare the clonal relatedness of the E. coli and K. pneumoniae isolates in both groups. Results: Colistin resistance was determined in three isolates with MICs of 32-128 mg/L. Significant differences in resistance to ampicillin (diabetic 93.8% vs control 72.5%, p value < 0.002), augmentin (80% vs 52.5%, p value < 0.003), cefuroxime (69.2% vs 45%, p value < 0.0014), ceftazidime (73.8% vs 42.5%, p value < 0.001) and ciprofloxacin (67.6% vs 40%, p value < 0.005) were determined. A significant difference in MDR rates between the two groups (diabetic 76.9%, control 57.5%, p value < 0.036) was also found. All antibiotic resistance genes showed a higher prevalence among the diabetic group, except for blaCTX-M, which was higher among the control group. PFGE showed a high rate of diversity within each group of isolates. Conclusions: Our results suggest an alarming rate of antibiotic resistance, in particular colistin resistance (1.8%), among K. pneumoniae isolated from diabetic patients in Kuwait. MDR among Enterobacteriaceae infections also seems to be a worrying issue among the diabetics of Kuwait. More efforts are required to limit the issue of antibiotic resistance in Kuwait, especially among patients with diabetes mellitus. Keywords: antibiotic resistance, diabetes, enterobacteriaceae, multi antibiotic resistance
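Comparisons such as the ampicillin result quoted above are typically tested with a 2x2 chi-square; the sketch below shows that computation, with counts reconstructed only approximately from the reported percentages (the control denominator is not stated explicitly, so 40 is assumed here), purely for illustration.

```python
# Sketch of a 2x2 chi-square test on resistance rates. Counts are rough
# reconstructions from the reported percentages and are illustrative only.
from scipy.stats import chi2_contingency

resistant_diabetic, total_diabetic = 61, 65      # ~93.8% (from the abstract)
resistant_control, total_control = 29, 40        # ~72.5% (denominator assumed)

table = [[resistant_diabetic, total_diabetic - resistant_diabetic],
         [resistant_control, total_control - resistant_control]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```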
Procedia PDF Downloads 365
1059 Using Rainfall Simulators to Design and Assess the Post-Mining Erosional Stability
Authors: Ashraf M. Khalifa, Hwat Bing So, Greg Maddocks
Abstract:
Changes to the mining environmental approvals process in Queensland have been rolled out under the MERFP Act (2018). This includes requirements for a Progressive Rehabilitation and Closure Plan (PRC Plan). Key considerations of the landform design report within the PRC Plan must include: (i) identification of materials available for landform rehabilitation, including their ability to achieve the required landform design outcomes; (ii) erosion assessments to determine landform heights, gradients, profiles, and material placement; (iii) slope profile design considering the interactions between soil erodibility, rainfall erosivity, landform height, gradient, and vegetation cover to identify acceptable erosion rates over a long-term average; and (iv) an analysis of future stability based on the factors described above, e.g., erosion and/or landform evolution modelling. ACARP funded an extensive and thorough erosion assessment program using rainfall simulators from 1998 to 2010. The ACARP program included laboratory assessment of 35 soil and spoil samples from 16 coal mines, and samples from a gold mine in Queensland, using a 3 x 0.8 m laboratory rainfall simulator. The reliability of the laboratory rainfall simulator was verified through field measurements using larger flumes (20 x 5 m) and catchment-scale measurements at three sites (three different catchments with an average area of 2.5 ha each). Soil cover systems are a primary component of a constructed mine landform. The primary functions of a soil cover system are to sustain vegetation and limit the infiltration of water and oxygen into underlying reactive mine waste. If the external surface of the landform erodes, the functions of the cover system cannot be maintained, and the cover system will most likely fail. Assessing a constructed landform's potential 'long-term' erosion stability requires defensible erosion rate thresholds below which rehabilitation landform designs are considered acceptably erosion-resistant or 'stable'. The process used to quantify erosion rates using rainfall simulators (flumes) to measure rill and inter-rill erosion, on bulk samples under laboratory conditions or on in-situ material under field conditions, will be explained. Keywords: open-cut, mining, erosion, rainfall simulator
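The slope-profile factors listed above (erodibility, erosivity, gradient and length, vegetation cover) correspond to the terms of the textbook Revised Universal Soil Loss Equation, A = R x K x LS x C x P; the sketch below evaluates that equation with hypothetical factor values, not the ACARP-derived parameters, to show how vegetation cover drives predicted loss.

```python
# Textbook RUSLE evaluation: A = R * K * LS * C * P, where A is long-term
# average soil loss (t/ha/yr). All factor values are hypothetical
# placeholders, not values from the ACARP program.
def rusle_soil_loss(R, K, LS, C, P):
    """R: rainfall erosivity, K: soil erodibility, LS: slope length/steepness
    factor, C: cover-management factor, P: support-practice factor."""
    return R * K * LS * C * P

# Effect of vegetation cover on a hypothetical batter slope:
for cover_C in (1.0, 0.30, 0.05):   # bare, partial, well-vegetated
    A = rusle_soil_loss(R=3000, K=0.035, LS=2.5, C=cover_C, P=1.0)
    print(f"C = {cover_C:.2f}: predicted soil loss ~ {A:.0f} t/ha/yr")
```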
Procedia PDF Downloads 101
1058 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters
Authors: Trevor C. Brown, David J. Miron
Abstract:
Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size, and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants, and Gibbs free energy are dependent on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers; these additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K, and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modelling provides information on the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes), and thermodynamic variations (Gibbs free energies) of the adsorption sites. Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics
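The constant-parameter Langmuir model mentioned above has the closed form q(P) = q_m K P / (1 + K P); the sketch below fits it to synthetic uptake data by nonlinear least squares. The FLS-PVLR estimation of pressure-varying parameters described in the abstract is a more involved procedure and is not reproduced here.

```python
# Minimal sketch: fitting the constant-parameter Langmuir isotherm
# q(P) = q_m * K * P / (1 + K * P) to synthetic uptake data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(P, q_m, K):
    return q_m * K * P / (1.0 + K * P)

# Synthetic isotherm: q_m = 2.0 mmol/g, K = 0.8 1/bar, plus noise
rng = np.random.default_rng(1)
P = np.linspace(0.05, 5.0, 25)                   # bar
q = langmuir(P, 2.0, 0.8) + rng.normal(0, 0.02, P.size)

(q_m_fit, K_fit), _ = curve_fit(langmuir, P, q, p0=(1.0, 1.0))
print(f"fitted q_m = {q_m_fit:.2f} mmol/g, K = {K_fit:.2f} 1/bar")
```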
Procedia PDF Downloads 234
1057 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) can be considered as operating in two phases: (1) observation/measurement, meaning the accumulation of gathered data at each sensor node; and (2) transfer of the collected data to some processing center (e.g., fusion servers) within the SN. An underwater sensor network can therefore be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. This process of data exchange between the aforementioned components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time and illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, stakeholder perspectives, and domain specificity. On the other hand, it helps reduce both the complexity of and the time spent on the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that the design activity for complex systems can be improved through the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is the provision of an early validation step, via a models-and-simulation approach, to consolidate the system design. Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
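The two-phase SN structure described above (observe at the node, then transfer to a fusion server) can be sketched as a plain domain model; the classes below are an illustrative assumption, not the ArchiMO meta-model or its generated NS-3 simulation code.

```python
# Illustrative domain model of the two-phase sensor network described in
# the abstract: nodes buffer measurements, a fusion server collects them.
from dataclasses import dataclass, field

@dataclass
class Measurement:
    sensor_id: str
    timestamp: float
    value: float            # e.g., acoustic level from a hydrophone

@dataclass
class SensorNode:
    sensor_id: str
    buffer: list = field(default_factory=list)

    def observe(self, timestamp: float, value: float) -> None:
        # Phase 1: accumulate gathered data at the node.
        self.buffer.append(Measurement(self.sensor_id, timestamp, value))

@dataclass
class FusionServer:
    received: list = field(default_factory=list)

    def collect(self, node: SensorNode) -> None:
        # Phase 2: transfer collected data to the processing center.
        self.received.extend(node.buffer)
        node.buffer.clear()

hydrophone = SensorNode("hydro-01")
hydrophone.observe(0.0, 112.5)
hydrophone.observe(1.0, 113.1)
server = FusionServer()
server.collect(hydrophone)
print(f"fusion server holds {len(server.received)} measurements")
```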
Procedia PDF Downloads 177
1056 Preparation of Silver and Silver-Gold, Universal and Repeatable, Surface Enhanced Raman Spectroscopy Platforms from SERSitive
Authors: Pawel Albrycht, Monika Ksiezopolska-Gocalska, Robert Holyst
Abstract:
Surface Enhanced Raman Spectroscopy (SERS) is a technique of growing importance, not only in purely scientific research related to analytical chemistry. It finds more and more applications in broadly understood testing - medical, forensic, pharmaceutical, food - and works well everywhere, on one condition: that the SERS substrates used for testing give adequate enhancement, repeatability, and homogeneity of the SERS signal. This is a problem that has existed since the invention of the technique. Some laboratories use colloids of silver or gold nanoparticles as SERS amplifiers, others form rough silver or gold surfaces, but the results are generally either weak or unrepeatable. Furthermore, these structures are very often highly specific - they amplify the signal of only a small group of compounds. This means that they work with some kinds of analytes, but only those which were used in the developer's laboratory. When it comes to research on different compounds, completely new SERS substrates are required. This underlay our decision to develop universal substrates for SERS spectroscopy. Generally, each compound has a different affinity for silver and for gold, which have the best SERS properties, and this affinity determines the signal obtained in the SERS spectrum. Our task was to create a platform that gives a characteristic 'fingerprint' of the largest possible number of compounds with very high repeatability, even at the expense of the enhancement factor (EF), since the possibility of repeating research results is of the utmost importance. Such SERS substrates are offered by the SERSitive company. The applied method is based on cyclic potentiodynamic electrodeposition of silver or silver-gold nanoparticles on the conductive surface of ITO-coated glass at a controlled temperature of the reaction solution. Silver nanoparticles are supplied in the form of silver nitrate (AgNO₃, 10 mM), gold nanoparticles are derived from tetrachloroauric acid (10 mM), while sodium sulfite (Na₂SO₃, 5 mM) is used as the reductant. To limit and standardize the size of the SERS surface on which the nanoparticles are deposited, photolithography is used: we protect the desired ITO-coated glass surface and then etch away the unprotected ITO layer, which prevents nanoparticles from settling at those sites. On the surface prepared in this way, we carry out the process described above, obtaining SERS surfaces with nanoparticles of 50-400 nm. The SERSitive platforms present high sensitivity (EF = 10⁵-10⁶), homogeneity, and repeatability (70-80%). Keywords: electrodeposition, nanoparticles, Raman spectroscopy, SERS, SERSitive, SERS platforms, SERS substrates
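The enhancement factors quoted above are conventionally estimated as EF = (I_SERS / N_SERS) / (I_ref / N_ref), the ratio of per-molecule SERS and normal-Raman intensities; the sketch below evaluates that formula with hypothetical intensities and molecule counts chosen to land in the quoted 10⁵-10⁶ range.

```python
# Standard substrate enhancement-factor estimate:
# EF = (I_SERS / N_SERS) / (I_ref / N_ref), where I are band intensities
# and N the numbers of probed molecules. All values below are hypothetical.
def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    return (i_sers / n_sers) / (i_ref / n_ref)

EF = enhancement_factor(i_sers=5.0e4, n_sers=1.0e6,
                        i_ref=1.0e3, n_ref=1.0e10)
print(f"EF ~ {EF:.1e}")   # -> 5.0e+05, within the quoted range
```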
Procedia PDF Downloads 155