Search results for: site selection optimization
5344 The Evaluation of Superiority of Foot Local Anesthesia Method in Dairy Cows
Authors: Samaneh Yavari, Christiane Pferrer, Elisabeth Engelke, Alexander Starke, Juergen Rehage
Abstract:
Background: Bovine limb interventions, especially claw surgeries, require selection of the most appropriate local anesthesia technique for superficial or deep interventions of the limbs. Two local anesthesia methods are routinely applied: Intravenous Regional Anesthesia (IVRA) and Nerve Block Anesthesia (NBA). However, studies investigating the quality, duration, and onset of full (complete) local anesthesia are lacking. The aim of our study was therefore to compare the onset and quality of IVRA and our modified NBA at the hind limb of dairy cows; this abstract considers only the onset of full local anesthesia. Materials and Methods: Six healthy, non-pregnant, non-lactating Holstein Friesian cows were used in a cross-over study design. The cows were divided into two groups to receive IVRA and our modified four-point NBA. For IVRA, 20 ml procaine without epinephrine was injected into the vena digitalis dorsalis communis III; for the modified four-point NBA, 10-15 ml procaine without epinephrine was injected perineurally to the superficial and deep peroneal nerves and the lateral and medial branches of the metatarsal nerves. A Grass S48 electrical stimulator was used for pain stimulation. Results: The electrical stimulation results revealed an onset of full local anesthesia approximately 10 minutes faster (p < 0.05) with the modified NBA than with IVRA. Conclusion and discussion: Despite available references reporting a faster onset of foot local anesthesia with IVRA, our study demonstrated that the modified four-point NBA can not only serve as a standard foot local anesthesia method for desensitizing the hind limb of dairy cows, but its selection also leads to a faster onset of complete desensitization of the distal hind limb, which is valuable in any bovine limb intervention under time constraint.
Keywords: IVRA, four-point NBA, dairy cow, hind limb, full onset
Procedia PDF Downloads 155
5343 A Fine-Grained Scheduling Algorithm for Heterogeneous Supercomputing Clusters Based on Graph Convolutional Networks and Proximal Policy Optimization
Authors: Jiahao Zhou, Lei Wang
Abstract:
In heterogeneous supercomputing clusters, designing an efficient scheduling strategy is crucial for enhancing both energy efficiency and workflow execution performance. The dynamic allocation and reclamation of computing resources are essential for improving resource utilization. However, existing studies often allocate fixed resources to jobs prior to execution, maintaining these resources until job completion, which overlooks the importance of dynamic scheduling. This paper proposes a heterogeneous hierarchical fine-grained scheduling algorithm (HeHiFiS) based on graph convolutional networks (GCN) and proximal policy optimization (PPO) to address issues such as prolonged workflow completion times and low resource utilization in heterogeneous supercomputing clusters. Specifically, GCN is employed to extract task dependency features as part of the state information, and the PPO reinforcement learning algorithm is then used to train the scheduling policy. The trained scheduling policy dynamically adjusts scheduling actions during operation based on the continuously changing states of tasks and computing resources. Additionally, we developed a heterogeneous scheduling simulation platform to validate the effectiveness of the proposed algorithm. Experimental results indicate that HeHiFiS, by incorporating resource inheritance and intra-task parallel mechanisms, significantly improves resource utilization. Compared to existing scheduling algorithms, HeHiFiS achieves over a 50% improvement in both job completion and response performance metrics.Keywords: heterogeneous, dynamic scheduling, GCN, PPO
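As an illustrative sketch only (not the authors' implementation), the step of extracting task-dependency features with a graph convolution before feeding a policy head can be written in a few lines of NumPy; the task graph, feature dimensions and softmax policy head below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: normalized neighborhood averaging, linear map, ReLU."""
    a_hat = adj + np.eye(adj.shape[0])                # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt          # symmetric normalization
    return np.maximum(a_norm @ feats @ weight, 0.0)   # ReLU

# Hypothetical 4-task DAG (task i depends on task j) and per-task features
adj = np.array([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=float)
task_feats = rng.random((4, 3))                       # e.g. runtime, memory, readiness

w1 = rng.normal(size=(3, 8))
embeddings = gcn_layer(adj, task_feats, w1)           # task-dependency state features

# A PPO agent would consume these embeddings (plus resource state) as its observation;
# here only the policy head producing probabilities over 4 scheduling actions is shown.
w_pi = rng.normal(size=(8, 4))
logits = embeddings.mean(axis=0) @ w_pi
policy = np.exp(logits - logits.max()); policy /= policy.sum()
print("action probabilities:", policy)
```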
Procedia PDF Downloads 8
5342 Palliative Orthovoltage Radiotherapy and Subcutaneous Infusion of Carboplatin for Treatment of Appendicular Osteosarcoma in Dogs
Authors: Kathryn L. Duncan, Charles A. Kuntz, Alessandra C. Santamaria, James O. Simcock
Abstract:
Access to megavoltage radiation therapy for small animals is limited in many locations around the world. This can preclude the use of palliative radiation therapy for the treatment of appendicular osteosarcoma in dogs. The objective of this study was to retrospectively assess the adverse effects and survival times of dogs with appendicular osteosarcoma that were treated with hypofractionated orthovoltage radiation therapy and adjunctive carboplatin chemotherapy administered via a single subcutaneous infusion. Medical records were reviewed retrospectively to identify client-owned dogs with spontaneously occurring appendicular osteosarcoma that was treated with palliative orthovoltage radiation therapy and a single subcutaneous infusion of carboplatin. Data recorded included signalment, tumour location, results of diagnostic imaging, haematologic and serum biochemical analyses, adverse effects of radiation therapy and chemotherapy, and survival times. Kaplan-Meier survival analysis was performed, and log-rank analysis was used to determine the impact of specific patient variables on survival time. Twenty-three dogs were identified that met the inclusion criteria. Median survival time for dogs was 182 days. Eleven dogs had adverse haematologic effects, 3 had adverse gastrointestinal effects, 6 had adverse effects at the radiation site and 7 developed infections at the carboplatin infusion site. No statistically significant differences were identified in survival times based on sex, tumour location, development of infection, or pretreatment serum alkaline phosphatase. Median survival time and incidence of adverse effects were comparable to those previously reported in dogs undergoing palliative radiation therapy with megavoltage or cobalt radiation sources and conventional intravenous carboplatin chemotherapy. The use of orthovoltage palliative radiation therapy may be a reasonable alternative to megavoltage radiation in locations where access is limited.Keywords: radiotherapy, veterinary oncology, chemotherapy, osteosarcoma
Procedia PDF Downloads 75
5341 A Scientific Method of Drug Development Based on Ayurvedic Bhaishajya Knowledge
Authors: Rajesh S. Mony, Vaidyaratnam Oushadhasala
Abstract:
An attempt is made in this study to evolve a drug development modality based on the classical Ayurvedic knowledge base as well as on modern scientific methodology. The present study involves (a) identification of a specific ailment condition, (b) selection of a polyherbal formulation, (c) deciding a suitable extraction procedure, (d) confirming the efficacy of the combination by in-vitro trials and (e) fixing the recommended dose. The ailment segment selected is the arthritic condition. The selected herbal combination is Kunturushka, Vibhitaki, Guggulu, Haridra, Maricha and Nirgundi, chosen as per classical Ayurvedic references and authenticated as per the API (Ayurvedic Pharmacopoeia of India). Each drug was extracted with hydroalcoholic menstruums at different ratios. After removal of residual solvent, each extract was assessed in vitro for anti-inflammatory and anti-arthritic activities and for COX enzyme inhibition (by UV-Vis spectrophotometer with positive control). Extracts showing good in-vitro activity were selected, and QC testing of each selected extract, including HPTLC, was performed to establish the in-process QC specifications. The single dose of the mixture of selected extracts was decided according to the level of in-vitro activity and the available toxicology data. Major groups such as phenolics, flavonoids, alkaloids and bitters were quantified by standard spectrophotometric and gravimetric methods. A marker assay method was developed and validated by HPTLC, and a well-resolved HPTLC fingerprint was developed for the single-dosage API (Active Pharmaceutical Ingredient, the mixture of extracts). Three batches were prepared to fix the in-process and API QC specifications.
Keywords: drug development, anti-inflammatory, quality standardisation, planar chromatography
Procedia PDF Downloads 104
5340 Multi-Criteria Decision Making Network Optimization for Green Supply Chains
Authors: Bandar A. Alkhayyal
Abstract:
Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Nowadays, there are major efforts underway to create a circular economy to reduce non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems which enables a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products for transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in industrial engineering and process cost modeling literature. The increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, have made GSC pricing an important subject of research. A non-linear physical programming model for optimization of pricing policy for remanufactured products that maximizes total profit and minimizes product recovery costs were examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into optimization models. Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using case study system created based on actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive study of quantitative evaluation and performance of the model has been done using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models, to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains
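The strategic facility/transport step can be pictured with a toy transportation LP: ship end-of-life product from collection centers to remanufacturing facilities at minimum cost, with a per-unit carbon charge added to each lane. All numbers below are illustrative placeholders, not data from the Boston case study, and the model is a simplification of the paper's physical programming formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 collection centers, 2 remanufacturing facilities
supply = np.array([120.0, 80.0])           # units available at each collection center
capacity = np.array([150.0, 100.0])        # capacity of each remanufacturing facility
cost = np.array([[4.0, 6.0],               # transport cost per unit, center i -> facility j
                 [5.0, 3.0]])
emis = np.array([[0.020, 0.030],           # t CO2e per unit shipped on each lane
                 [0.025, 0.015]])
carbon_price = 40.0                        # $/t CO2e, matching the paper's scenario value

c = (cost + carbon_price * emis).ravel()   # objective coefficients for flows x[i, j]

# Equality: each center ships out exactly its supply
A_eq = np.zeros((2, 4)); A_eq[0, 0:2] = 1; A_eq[1, 2:4] = 1
# Inequality: inbound flow at each facility stays within capacity
A_ub = np.zeros((2, 4)); A_ub[0, [0, 2]] = 1; A_ub[1, [1, 3]] = 1

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply, bounds=(0, None))
print(res.x.reshape(2, 2))                 # optimal flows
print(res.fun)                             # total transport + carbon objective
```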
Procedia PDF Downloads 162
5339 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework
Authors: Iulia E. Falcan
Abstract:
The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation, at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources – such as biomass. This paper uses NASA; derived hourly data on weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators-Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technologies portfolio. While this issue has been addressed before, it was done so using deterministic models that extrapolated historic data on weather patterns and power demand, as well as ignoring the risk of an unbalanced grid-risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data and thus expected power output from RE technologies, rather than known future power output. The latter level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels – 1%, 5% and 10%, important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output - rather than extrapolating historic data - paints a more realistic picture and reveals important departures from results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid - and assigning it different thresholds - reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization
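As a rough illustration of the second level of uncertainty, the Conditional Value at Risk of unmet demand can be evaluated empirically over weather scenarios before being imposed as a constraint. The scenario generation below is random placeholder data, not the NASA-derived series or ENTSO-E load data used in the paper.

```python
import numpy as np

def cvar(losses, alpha):
    """Empirical CVaR: mean of the worst (1 - alpha) share of scenario losses."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(1)
n_scen, n_hours = 1000, 24
demand = 60 + 10 * rng.standard_normal((n_scen, n_hours))       # GW, hypothetical
renewables = 55 + 15 * rng.standard_normal((n_scen, n_hours))   # wind + solar output, hypothetical
dispatchable = 8.0                                              # storage + biomass cover per hour

# Energy short per scenario (GWh); the portfolio optimizer would penalize or constrain this
unmet = np.clip(demand - renewables - dispatchable, 0, None).sum(axis=1)

# Portfolio is acceptable if CVaR of unmet energy stays below a chosen threshold
print("CVaR at the 95% level of unmet energy:", cvar(unmet, 0.95))
```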
Procedia PDF Downloads 174
5338 Dynamic Web-Based 2D Medical Image Visualization and Processing Software
Authors: Abdelhalim. N. Mohammed, Mohammed. Y. Esmail
Abstract:
Over recent decades, medical imaging was dominated by the use of costly film media for review and archiving of medical investigations; however, owing to developments in network technologies and the wide acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has emerged. Web technologies have been used successfully in telemedicine applications, and the combination of web technologies with DICOM was used to design a web-based, open-source DICOM viewer. The web server allows query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic web page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP 'Apache server' was used to create a local web server for testing and deployment of the dynamic site. The web-based viewer was connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique was used, in which the 2-D discrete wavelet transform decomposes the image and the wavelet coefficients are thresholded and transmitted with entropy encoding to decrease transmission time, storage cost and capacity. Compression performance was estimated using image quality metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN
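A minimal sketch of the compression step described above, using PyWavelets for a 3-level 2-D 'coif3' decomposition, hard thresholding of the detail coefficients, and the MSE/PSNR metrics; the synthetic image, the threshold value and the coefficient-based CR definition are placeholders rather than the paper's exact pipeline.

```python
import numpy as np
import pywt

img = np.random.rand(256, 256) * 255           # placeholder for a DICOM pixel array

# 3-level 2-D DWT with the 'coif3' filter
coeffs = pywt.wavedec2(img, 'coif3', level=3)

# Hard-threshold detail coefficients; small coefficients become zero and compress well
thr = 20.0
coeffs_t = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode='hard') for d in lvl)
                          for lvl in coeffs[1:]]

rec = pywt.waverec2(coeffs_t, 'coif3')[:img.shape[0], :img.shape[1]]

mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
# Simplified CR: fraction of wavelet coefficients zeroed out by the threshold
nonzero = np.count_nonzero(coeffs_t[0]) + sum(np.count_nonzero(a) for lvl in coeffs_t[1:] for a in lvl)
cr = 1 - nonzero / img.size
print(f"MSE={mse:.2f}  PSNR={psnr:.1f} dB  CR={cr:.2%}")
```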
Procedia PDF Downloads 165
5337 Research on the Function Optimization of China-Hungary Economic and Trade Cooperation Zone
Authors: Wenjuan Lu
Abstract:
China and Hungary have risen from a friendly and comprehensive cooperative relationship to a comprehensive strategic partnership in recent years, and the economic and trade relations between the two countries have developed smoothly. As an important country along the ‘Belt and Road’, Hungary and China have strong economic complementarities and have unique advantages in carrying China's industrial transfer and economic transformation and development. The construction of the China-Hungary Economic and Trade Cooperation Zone, which was initiated by the ‘Sino-Hungarian Borsod Industrial Zone’ and the ‘Hungarian Central European Trade and Logistics Cooperation Park’ has promoted infrastructure construction, optimized production capacity, promoted industrial restructuring, and formed brand and agglomeration effects. Enhancing the influence of Chinese companies in the European market has also promoted economic development in Hungary and even in Central and Eastern Europe. However, as the China-Hungary Economic and Trade Cooperation Zone is still in its infancy, there are still shortcomings such as small scale, single function, and no prominent platform. In the future, based on the needs of China's cooperation with ‘17+1’ and China-Hungary cooperation, on the basis of appropriately expanding the scale of economic and trade cooperation zones and appropriately increasing the number of economic and trade cooperation zones, it is better to focus on optimizing and adjusting its functions and highlighting different economic and trade cooperation. The differentiated function of the trade zones strengthens the multi-faceted cooperation of economic and trade cooperation zones and highlights its role as a platform for cooperation in information, capital, and services.Keywords: ‘One Belt, One Road’ Initiative, China-Hungary economic and trade cooperation zone, function optimization, Central and Eastern Europe
Procedia PDF Downloads 184
5336 A User-Directed Approach to Optimization via Metaprogramming
Authors: Eashan Hatti
Abstract:
In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance-high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant of an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.Keywords: optimization, metaprogramming, logic programming, abstraction
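Peridot's own syntax is not given in the abstract, so the sketch below only illustrates the underlying idea, a library-defined optimization expressed as declarative rewrite rules over a program representation, written in ordinary Python rather than Peridot's logic-programming meta level.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mul:
    left: object
    right: object

@dataclass(frozen=True)
class Add:
    left: object
    right: object

def rewrite(expr):
    """One bottom-up pass of algebraic rewrite rules, standing in for a library-supplied optimization."""
    if isinstance(expr, (Mul, Add)):
        expr = type(expr)(rewrite(expr.left), rewrite(expr.right))
    if isinstance(expr, Mul) and expr.right == 1:   # x * 1 -> x
        return expr.left
    if isinstance(expr, Mul) and expr.right == 0:   # x * 0 -> 0
        return 0
    if isinstance(expr, Add) and expr.right == 0:   # x + 0 -> x
        return expr.left
    return expr

print(rewrite(Add(Mul("x", 1), 0)))  # -> 'x'
```

In the approach the abstract describes, rules like these would live in the library that defines the abstraction, and the meta level's unification and non-determinism would take the place of the explicit isinstance checks.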
Procedia PDF Downloads 91
5335 Effect of Information and Communication Technology (ICT) Usage by Cassava Farmers in Otukpo Local Government Area of Benue State, Nigeria
Authors: O. J. Ajayi, J. H. Tsado, F. Olah
Abstract:
The study analyzed the effect of information and communication technology (ICT) usage on cassava farmers in Otukpo local government area of Benue state, Nigeria. Primary data was collected from 120 randomly selected cassava farmers using multi-stage sampling technique. A structured questionnaire and interview schedule was employed to generate data. Data were analyzed using descriptive (frequency, mean and percentage) and inferential statistics (OLS (ordinary least square) and Chi-square). The result revealed that majority (78.3%) were within the age range of 21-50 years implying that the respondents were within the active age for maximum production. 96.8% of the respondents had one form of formal education or the other. The sources of ICT facilities readily available in area were radio(84.2%), television(64.2%) and mobile phone(90.8%) with the latter being the most relied upon for cassava farming. Most of the farmers were aware (98.3%) and had access (95.8%) to these ICT facilities. The dependence on mobile phone and radio were highly relevant in cassava stem selection, land selection, land preparation, cassava planting technique, fertilizer application and pest and disease management. The value of coefficient of determination (R2) indicated an 89.1% variation in the output of cassava farmers explained by the inputs indicated in the regression model implying that, there is a positive and significant relationship between the inputs and output. The results also indicated that labour, fertilizer and farm size were significant at 1% level of probability while ICT use was significant at 10%. Further findings showed that finance (78.3%) was the major constraint associated with ICT use. Recommendations were made on strengthening the use of ICT especially contemporary ones like the computer and internet among farmers for easy information sourcing which can boost agricultural production, improve livelihood and subsequently food security. This may be achieved by providing credit or subsidies and information centres like telecentres and cyber cafes through government assistance or partnership.Keywords: ICT, cassava farmers, inputs, output
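The OLS step described above can be reproduced in outline with statsmodels; the variable names and synthetic data below are placeholders standing in for the survey inputs (labour, fertilizer, farm size, ICT use) and cassava output, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 120  # sample size used in the study

# Placeholder survey data
df = pd.DataFrame({
    "labour":     rng.uniform(10, 100, n),   # person-days
    "fertilizer": rng.uniform(0, 200, n),    # kg
    "farm_size":  rng.uniform(0.5, 5, n),    # hectares
    "ict_use":    rng.integers(0, 2, n),     # 1 = relies on phone/radio for farm information
})
df["output"] = (0.8 * df["labour"] + 1.2 * df["fertilizer"] + 300 * df["farm_size"]
                + 50 * df["ict_use"] + rng.normal(0, 50, n))   # kg of cassava

X = sm.add_constant(df[["labour", "fertilizer", "farm_size", "ict_use"]])
model = sm.OLS(df["output"], X).fit()
print(model.rsquared)   # share of output variation explained by the inputs
print(model.pvalues)    # significance of each input
```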
Procedia PDF Downloads 313
5334 Drugstore Control System Design and Realization Based on Programmable Logic Controller (PLC)
Authors: Muhammad Faheem Khakhi, Jian Yu Wang, Salman Muhammad, Muhammad Faisal Shabir
Abstract:
Population growth and China's two-child policy will boost the pharmaceutical market, and it will continue to grow for some time; the traditional pharmacy dispensary has been unable to meet the growing medical needs of the people. With strong national policy support, the automatic transformation of traditional pharmacies is the trend of the times, and new intelligent pharmacy systems will continue to promote the development of the pharmaceutical industry. Against this background, and based on PLC control, this paper proposes an intelligent storage and automatic drug delivery system; the complete design of the lower computer's control system and the host computer's software system is presented. The system can be applied to dispensing work for Chinese herbal medicines and Western medicines. Firstly, the essentials of an intelligent control system for a pharmacy are discussed. After analysis of the requirements, the overall scheme of the system design is presented. Secondly, the software and hardware design of the lower computer's control system is introduced, including the selection of the PLC and of the motion control system; the human-computer interaction module and the communication between PC and PLC are addressed, and the program design and development of the PLC control system are completed. The design of the upper computer's software management system is described in detail: by analysing the E-R diagram, the database is established, the communication protocol between the systems is customized, and C++ Builder is adopted to realize the interface module, supply module, main control module, etc. The paper also gives the implementation of the multi-threaded system and the communication method. Lastly, each module of the lower computer control system is tested; then, after building a test environment, the functional test of the upper computer software management system is completed. On this basis, the entire control system passes the overall test.
Keywords: automatic pharmacy, PLC, control system, management system, communication
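The abstract does not disclose the customized PC-PLC protocol, so the following is only a hypothetical sketch of the host-computer side: a background thread sends framed dispensing commands over TCP and waits for acknowledgements. The frame layout, IP address, port and field names are invented for illustration.

```python
import socket
import struct
import threading
import queue

PLC_ADDR = ("192.168.0.10", 5000)    # hypothetical PLC IP and port

commands = queue.Queue()             # filled by the management-system GUI / database layer

def dispense_worker():
    """Send one framed command per queued prescription item and wait for the PLC acknowledgement."""
    with socket.create_connection(PLC_ADDR, timeout=5) as sock:
        while True:
            slot_id, quantity = commands.get()
            # Hypothetical frame: 2-byte header, storage slot number, quantity (big-endian shorts)
            sock.sendall(struct.pack(">HHH", 0xA55A, slot_id, quantity))
            ack = sock.recv(4)
            if ack != struct.pack(">HH", 0xA55A, slot_id):
                raise RuntimeError(f"PLC did not acknowledge slot {slot_id}")

threading.Thread(target=dispense_worker, daemon=True).start()
commands.put((12, 2))                # e.g. dispense 2 packs from storage slot 12
```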
Procedia PDF Downloads 314
5333 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is becoming of increasing interest for asset owners to plan timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, structural performance. The determination of the quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure, but the data obtained can also be used for the prediction of its future development and associated risks. At present, wet chemical analysis of ground concrete samples by a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed by the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly. The chloride profile obtained is based on depth intervals of 10 mm. LIBS is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentrations like in a crack. The results are correlated directly to the mass of the binder, and it can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples for the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented. An example for the visualization of the Li transport in concrete is also shown. These examples show the potential of the method for a fast, reliable, and automated two-dimensional investigation of transport processes. Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows a two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. Results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedures - the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.Keywords: chemical analysis, concrete, LIBS, spectroscopy
Procedia PDF Downloads 108
5332 San Francisco Public Utilities Commission Headquarters "The Greenest Urban Building in the United States"
Authors: Charu Sharma
Abstract:
San Francisco Public Utilities Commission’s Headquarters was listed in the 2013-American Institute of Architects Committee of the Environment (AIA COTE) Top Ten Green Projects. This 13-story, 277,000-square-foot building, housing more than 900 of the agency’s employees was completed in June 2012. It was designed to achieve LEED Platinum Certification and boasts a plethora of green features to significantly reduce the use of energy and water consumption, and provide a healthy office work environment with high interior air quality and natural daylight. Key sustainability features include on-site clean energy generation through renewable photovoltaic and wind sources providing $118 million in energy cost savings over 75 years; 45 percent daylight harvesting; and the consumption of 55 percent less energy and a 32 percent less electricity demand from the main power grid. It uses 60 percent less water usage than an average 13-story office building as most of that water will be recycled for non-potable uses at the site, running through a system of underground tanks and artificial wetlands that cleans and clarifies whatever is flushed down toilets or washed down drains. This is one of the first buildings in the nation with treatment of gray and black water. The building utilizes an innovative structural system with post tensioned cores that will provide the highest asset preservation for the building. In addition, the building uses a “green” concrete mixture that releases less carbon gases. As a public utility commission this building has set a good example for resource conservation-the building is expected to be cheaper to operate and maintain as time goes on and will have saved rate-payers $500 million in energy and water savings. Within the anticipated 100-year lifespan of the building, our ratepayers will save approximately $3.7 billion through the combination of rental savings, energy efficiencies, and asset ownership.Keywords: energy efficiency, sustainability, resource conservation, asset ownership, rental savings
Procedia PDF Downloads 440
5331 Optimization of Lead Bioremediation by Marine Halomonas sp. ES015 Using Statistical Experimental Methods
Authors: Aliaa M. El-Borai, Ehab A. Beltagy, Eman E. Gadallah, Samy A. ElAssar
Abstract:
Bioremediation technology is now used for treatment instead of traditional metal removal methods. A strain was isolated from Marsa Alam, Red sea, Egypt showed high resistance to high lead concentration and was identified by the 16S rRNA gene sequencing technique as Halomonas sp. ES015. Medium optimization was carried out using Plackett-Burman design, and the most significant factors were yeast extract, casamino acid and inoculums size. The optimized media obtained by the statistical design raised the removal efficiency from 84% to 99% from initial concentration 250 ppm of lead. Moreover, Box-Behnken experimental design was applied to study the relationship between yeast extract concentration, casamino acid concentration and inoculums size. The optimized medium increased removal efficiency to 97% from initial concentration 500 ppm of lead. Immobilized Halomonas sp. ES015 cells on sponge cubes, using optimized medium in loop bioremediation column, showed relatively constant lead removal efficiency when reused six successive cycles over the range of time interval. Also metal removal efficiency was not affected by flow rate changes. Finally, the results of this research refer to the possibility of lead bioremediation by free or immobilized cells of Halomonas sp. ES015. Also, bioremediation can be done in batch cultures and semicontinuous cultures using column technology.Keywords: bioremediation, lead, Box–Behnken, Halomonas sp. ES015, loop bioremediation, Plackett-Burman
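A compact way to picture the Box-Behnken step is fitting a second-order response surface to the design results by least squares and searching it for the optimum. The three factors match the abstract (yeast extract, casamino acid, inoculum size), but the design responses below are fabricated placeholders, not the study's measurements.

```python
import numpy as np

# Coded levels (-1, 0, +1) of the standard 3-factor Box-Behnken design
# for yeast extract, casamino acid and inoculum size, plus three center points.
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
# Placeholder lead-removal efficiencies (%) for each run
y = np.array([82, 88, 85, 93, 80, 90, 84, 95, 81, 89, 86, 94, 96, 97, 96], dtype=float)

def quad_terms(x):
    a, b, c = x
    return [1, a, b, c, a * b, a * c, b * c, a * a, b * b, c * c]   # full quadratic model

A = np.array([quad_terms(row) for row in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)      # least-squares fit of the response surface

# Evaluate the fitted surface on a coarse grid to locate the predicted optimum settings
grid = np.linspace(-1, 1, 21)
best = max(((a, b, c) for a in grid for b in grid for c in grid),
           key=lambda p: np.dot(quad_terms(p), coef))
print("predicted optimum (coded factor levels):", best)
```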
Procedia PDF Downloads 200
5330 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem
Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee
Abstract:
Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both the static and the dynamic environment (denoted by SWTA and DWTA, respectively). Because the problem must be solved within an operationally relevant computation time, WTA has suffered from limited solution efficiency; as a result, SWTA and DWTA problems have been solved only for restricted battlefield situations. In this paper, the general situation under continuous time is considered through the Time-based Weapon-Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy. Although the TWTA optimization model becomes inefficient at large problem sizes, the decomposed opt-opt algorithm, based on linearization and decomposition, extracts efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long to solve with the optimization model alone, several greedy-based algorithms are proposed; they yield lower objective values than the decomposed opt-opt algorithm but require very little computation time. Hence, this paper proposes an improved method by applying decomposition to TWTA, so that more practical and effective methods can be developed for using TWTA on the battlefield.
Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research
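To make the greedy baseline concrete, here is a minimal static WTA greedy rule: assign each launcher to the target with the largest remaining marginal gain in expected damage. Kill probabilities and target values are placeholders, and the paper's time-based scheduling layer is not modeled.

```python
import numpy as np

value = np.array([10.0, 6.0, 8.0])          # target values (hypothetical)
pk = np.array([[0.7, 0.4, 0.5],             # kill probability, launcher i vs target j
               [0.3, 0.8, 0.4],
               [0.5, 0.5, 0.6]])

survival = np.ones_like(value)              # probability each target survives so far
assignment = {}

for launcher in np.argsort(-pk.max(axis=1)):        # consider strongest launchers first
    # marginal expected damage of sending this launcher at each target
    gain = value * survival * pk[launcher]
    target = int(np.argmax(gain))
    assignment[int(launcher)] = target
    survival[target] *= (1 - pk[launcher, target])  # update survival after the shot

expected_damage = float(np.dot(value, 1 - survival))
print(assignment, expected_damage)
```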
Procedia PDF Downloads 340
5329 Robotic Arm-Automated Spray Painting with One-Shot Object Detection and Region-Based Path Optimization
Authors: Iqraq Kamal, Akmal Razif, Sivadas Chandra Sekaran, Ahmad Syazwan Hisaburi
Abstract:
Painting plays a crucial role in the aerospace manufacturing industry, serving both protective and cosmetic purposes for components. However, the traditional manual painting method is time-consuming and labor-intensive, posing challenges for the sector in achieving higher efficiency. Additionally, the current automated robot path planning has been a bottleneck for spray painting processes, as typical manual teaching methods are time-consuming, error-prone, and skill-dependent. Therefore, it is essential to develop automated tool path planning methods to replace manual ones, reducing costs and improving product quality. Focusing on flat panel painting in aerospace manufacturing, this study aims to address issues related to unreliable part identification techniques caused by the high-mixture, low-volume nature of the industry. The proposed solution involves using a spray gun and a UR10 robotic arm with a vision system that utilizes one-shot object detection (OS2D) to identify parts accurately. Additionally, the research optimizes path planning by concentrating on the region of interest—specifically, the identified part, rather than uniformly covering the entire painting tray.Keywords: aerospace manufacturing, one-shot object detection, automated spray painting, vision-based path optimization, deep learning, automation, robotic arm
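After the part is localized, the region-based planning essentially reduces to generating a raster (boustrophedon) pass over the detected bounding box instead of the whole tray. The detector output below is a made-up bounding box; OS2D itself and the UR10 motion commands are not invoked here.

```python
import numpy as np

def raster_path(bbox, stripe_spacing, margin=0.0):
    """Back-and-forth spray waypoints covering only the detected part's bounding box.

    bbox: (x_min, y_min, x_max, y_max) in tray/robot coordinates (metres).
    """
    x_min, y_min, x_max, y_max = bbox
    x_min += margin; x_max -= margin
    ys = np.arange(y_min + margin, y_max - margin + 1e-9, stripe_spacing)
    path = []
    for i, y in enumerate(ys):
        xs = (x_min, x_max) if i % 2 == 0 else (x_max, x_min)   # alternate stripe direction
        path.append((xs[0], float(y)))
        path.append((xs[1], float(y)))
    return path

# Hypothetical detection result for a flat panel on the painting tray
detected_bbox = (0.10, 0.20, 0.60, 0.55)
waypoints = raster_path(detected_bbox, stripe_spacing=0.05, margin=0.01)
print(len(waypoints), "waypoints, first few:", waypoints[:4])
```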
Procedia PDF Downloads 85
5328 Stability Optimization of NaBH₄ via pH and H₂O:NaBH₄ Ratios for Large Scale Hydrogen Production
Authors: Parth Mehta, Vedasri Bai Khavala, Prabhu Rajagopal, Tiju Thomas
Abstract:
There is an increasing need for alternative clean fuels, and hydrogen (H₂) has long been considered a promising solution with a high calorific value (142MJ/kg). However, the storage of H₂ and expensive processes for its generation have hindered its usage. Sodium borohydride (NaBH₄) can potentially be used as an economically viable means of H₂ storage. Thus far, there have been attempts to optimize the life of NaBH₄ (half-life) in aqueous media by stabilizing it with sodium hydroxide (NaOH) for various pH values. Other reports have shown that H₂ yield and reaction kinetics remained constant for all ratios of H₂O to NaBH₄ > 30:1, without any acidic catalysts. Here we highlight the importance of pH and H₂O: NaBH₄ ratio (80:1, 40:1, 20:1 and 10:1 by weight), for NaBH₄ stabilization (half-life reaction time at room temperature) and corrosion minimization of H₂ reactor components. It is interesting to observe that at any particular pH>10 (e.g., pH = 10, 11 and 12), the H₂O: NaBH₄ ratio does not have the expected linear dependence with stability. On the contrary, high stability was observed at the ratio of 10:1 H₂O: NaBH₄ across all pH>10. When the H₂O: NaBH₄ ratio is increased from 10:1 to 20:1 and beyond (till 80:1), constant stability (% degradation) is observed with respect to time. For practical usage (consumption within 6 hours of making NaBH₄ solution), 15% degradation at pH 11 and NaBH₄: H₂O ratio of 10:1 is recommended. Increasing this ratio demands higher NaOH concentration at the same pH, thus requiring a higher concentration or volume of acid (e.g., HCl) for H₂ generation. The reactions are done with tap water to render the results useful from an industrial standpoint. The observed stability regimes are rationalized based on complexes associated with NaBH₄ when solvated in water, which depend sensitively on both pH and NaBH₄: H₂O ratio.Keywords: hydrogen, sodium borohydride, stability optimization, H₂O:NaBH₄ ratio
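Taking the abstract's own working figure, about 15% degradation within 6 hours at pH 11 and a 10:1 H₂O:NaBH₄ ratio, a first-order decay assumption gives a quick feel for the implied rate constant and half-life; the first-order assumption is ours, not the authors'.

```python
import math

degraded_fraction = 0.15   # 15% NaBH4 lost after 6 h (figure quoted in the abstract)
t_hours = 6.0

# Assume first-order hydrolysis: C(t) = C0 * exp(-k * t)
k = -math.log(1 - degraded_fraction) / t_hours   # ~0.027 per hour
half_life = math.log(2) / k                      # ~25.6 hours

print(f"k = {k:.3f} 1/h, implied half-life = {half_life:.1f} h")
```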
Procedia PDF Downloads 127
5327 Stress Hyperglycaemia and Glycaemic Control Post Cardiac Surgery: Relaxed Targets May Be Acceptable
Authors: Nicholas Bayfield, Liam Bibo, Charley Budgeon, Robert Larbalestier, Tom Briffa
Abstract:
Introduction: Stress hyperglycaemia is common following cardiac surgery. Its optimal management is uncertain and may differ by diabetic status. This study assesses the in-hospital glycaemic management of cardiac surgery patients and associated postoperative outcomes. Methods: A retrospective cohort analysis of all patients undergoing cardiac surgery at Fiona Stanley Hospital from February 2015 to May 2019 was undertaken. Management and outcomes of hyperglycaemia following cardiac surgery were assessed. Follow-up was assessed to 1 year postoperatively. Multivariate regression modelling was utilised. Results: 1050 non-diabetic patients and 689 diabetic patients were included. In the non-diabetic cohort, patients with mild (peak blood sugar level [BSL] < 14.3), transient stress hyperglycaemia managed without insulin were not at an increased risk of wound-related morbidity (P=0.899) or mortality at 1 year (P=0.483). Insulin management was associated with wound-related readmission to hospital (P=0.004) and superficial sternal wound infection (P=0.047). Prolonged or severe stress hyperglycaemia was predictive of hospital re-admission (P=0.050) but not morbidity or mortality (P=0.546). Diabetes mellitus was an independent risk factor 1-year mortality (OR; 1.972 [1.041–3.736], P=0.037), graft harvest site wound infection (OR; 1.810 [1.134–2.889], P=0.013) and wound-related readmission (OR; 1.866 [1.076–3.236], P=0.026). In diabetics, postoperative peak BSL > 13.9mmol/L was predictive of graft harvest site infections (OR; 3.528 [1.724-7.217], P=0.001) and wound-related readmission OR; 3.462 [1.540-7.783], P=0.003) regardless of modality of management. A peak BSL of 10.0-13.9 did not increase the risk of morbidity/mortality compared to a peak BSL of < 10.0 (P=0.557). Diabetics with a peak BSL of 13.9 or less did not have significantly increased morbidity/mortality outcomes compared to non-diabetics (P=0.418). Conclusion: In non-diabetic patients, transient mild stress hyperglycaemia following cardiac surgery does not uniformly require treatment. In diabetic patients, postoperative hyperglycaemia with peak BSL exceeding 13.9mmol/L was associated with wound-related morbidity and hospital readmission following cardiac surgery.Keywords: cardiac surgery, pulmonary embolism, pulmonary embolectomy, cardiopulmonary bypass
Procedia PDF Downloads 166
5326 Chaotic Sequence Noise Reduction and Chaotic Recognition Rate Improvement Based on Improved Local Geometric Projection
Authors: Rubin Dan, Xingcai Wang, Ziyang Chen
Abstract:
A chaotic time series noise reduction method based on the fusion of the local projection method, wavelet transform, and particle swarm algorithm (referred to as the LW-PSO method) is proposed to address the problem of false recognition due to noise in the recognition process of chaotic time series containing noise. The method first uses phase space reconstruction to recover the original dynamical system characteristics and removes the noise subspace by selecting the neighborhood radius; then it uses wavelet transform to remove D1-D3 high-frequency components to maximize the retention of signal information while least-squares optimization is performed by the particle swarm algorithm. The Lorenz system containing 30% Gaussian white noise is simulated and verified, and the phase space, SNR value, RMSE value, and K value of the 0-1 test method before and after noise reduction of the Schreiber method, local projection method, wavelet transform method, and LW-PSO method are compared and analyzed, which proves that the LW-PSO method has a better noise reduction effect compared with the other three common methods. The method is also applied to the classical system to evaluate the noise reduction effect of the four methods and the original system identification effect, which further verifies the superiority of the LW-PSO method. Finally, it is applied to the Chengdu rainfall chaotic sequence for research, and the results prove that the LW-PSO method can effectively reduce the noise and improve the chaos recognition rate.Keywords: Schreiber noise reduction, wavelet transform, particle swarm optimization, 0-1 test method, chaotic sequence denoising
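The wavelet stage of the LW-PSO pipeline (removing the D1-D3 high-frequency detail components) can be sketched with PyWavelets on a noisy placeholder signal; the signal, wavelet choice and decomposition depth are assumptions, and the local-projection and particle-swarm refinement stages are omitted.

```python
import numpy as np
import pywt

# Placeholder noisy series standing in for a Lorenz coordinate with 30% Gaussian white noise
t = np.linspace(0, 20 * np.pi, 4096)
clean = np.sin(t) * np.sin(0.31 * t) * np.cos(1.7 * t)
noisy = clean + 0.3 * np.std(clean) * np.random.default_rng(0).standard_normal(t.size)

# Multilevel 1-D DWT; zero the three finest detail levels D1-D3, keep the rest
coeffs = pywt.wavedec(noisy, 'db4', level=5)
for lvl in (-1, -2, -3):                       # D1, D2, D3 are the last three detail arrays
    coeffs[lvl] = np.zeros_like(coeffs[lvl])
denoised = pywt.waverec(coeffs, 'db4')[:noisy.size]

rmse_before = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_after = np.sqrt(np.mean((denoised - clean) ** 2))
print(f"RMSE before = {rmse_before:.3f}, after D1-D3 removal = {rmse_after:.3f}")
```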
Procedia PDF Downloads 205
5325 Enhancement of Long Term Peak Demand Forecast in Peninsular Malaysia Using Hourly Load Profile
Authors: Nazaitul Idya Hamzah, Muhammad Syafiq Mazli, Maszatul Akmar Mustafa
Abstract:
The peak demand forecast is crucial to identify the future generation plant up needed in the long-term capacity planning analysis for Peninsular Malaysia as well as for the transmission and distribution network planning activities. Currently, peak demand forecast (in Mega Watt) is derived from the generation forecast by using load factor assumption. However, a forecast using this method has underperformed due to the structural changes in the economy, emerging trends and weather uncertainty. The dynamic changes of these drivers will result in many possible outcomes of peak demand for Peninsular Malaysia. This paper will look into the independent model of peak demand forecasting. The model begins with the selection of driver variables to capture long-term growth. This selection and construction of variables, which include econometric, emerging trend and energy variables, will have an impact on the peak forecast. The actual framework begins with the development of system energy and load shape forecast by using the system’s hourly data. The shape forecast represents the system shape assuming all embedded technology and use patterns to continue in the future. This is necessary to identify the movements in the peak hour or changes in the system load factor. The next step would be developing the peak forecast, which involves an iterative process to explore model structures and variables. The final step is combining the system energy, shape, and peak forecasts into the hourly system forecast then modifying it with the forecast adjustments. Forecast adjustments are among other sales forecasts for electric vehicles, solar and other adjustments. The framework will result in an hourly forecast that captures growth, peak usage and new technologies. The advantage of this approach as compared to the current methodology is that the peaks capture new technology impacts that change the load shape.Keywords: hourly load profile, load forecasting, long term peak demand forecasting, peak demand
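The final "combine energy, shape and peak" step can be written as a few array operations: scale a normalized hourly shape by the annual energy forecast, apply forecast adjustments, then read off the system peak. The shape, energy target and EV adjustment below are dummy numbers, not Peninsular Malaysia data.

```python
import numpy as np

hours_per_year = 8760
hour = np.arange(hours_per_year)

# Normalized hourly load shape (sums to 1): placeholder daily + seasonal pattern
shape = 1.0 + 0.25 * np.sin(2 * np.pi * (hour % 24) / 24) + 0.1 * np.sin(2 * np.pi * hour / hours_per_year)
shape = shape / shape.sum()

energy_forecast_gwh = 135_000.0                       # forecast annual system energy (dummy)
hourly_mw = shape * energy_forecast_gwh * 1000        # GWh -> MWh per hour, i.e. average MW in each hour

# Forecast adjustment, e.g. additional evening electric-vehicle charging load (dummy)
ev_adjustment_mw = 250.0
evening = (hour % 24 >= 19) & (hour % 24 <= 22)
hourly_mw[evening] += ev_adjustment_mw

peak_mw = hourly_mw.max()
load_factor = hourly_mw.mean() / peak_mw
print(f"forecast peak = {peak_mw:,.0f} MW, implied load factor = {load_factor:.2f}")
```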
Procedia PDF Downloads 179
5324 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method
Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry
Abstract:
The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area and thus it plays an important role in numerous specifications such as durability, comfort, crash, etc. During the development of new vehicle projects in Renault, durability validation is always the main focus while deployment of comfort comes later in the project. Therefore, sometimes design choices have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important point of concern as it is related to manufacturing costs as well as the performance after the ageing of components like shock absorbers. In this paper an approach is proposed aiming to realize a multi-objective optimization between chassis endurance and comfort while taking the random factors into consideration. The adaptive-sparse polynomial chaos expansion method (PCE) with Chebyshev polynomial series has been applied to predict responses’ uncertainty intervals of a system according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First an initial design of experiments is realized to build the response surfaces which represent statistically a black-box system. Secondly within several iterations an optimum set is proposed and validated which will form a Pareto front. At the same time the robustness of each response, served as additional objectives, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally an inverse strategy is carried out to determine the parameters’ tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter car model has been tested as an example by applying the road excitations from the actual road measurements for both endurance and comfort calculations. One indicator based on the Basquin’s law is defined to compare the global chassis durability of different parameter settings. Another indicator related to comfort is obtained from the vertical acceleration of the sprung mass. An optimum set with best robustness has been finally obtained and the reference tests prove a good robustness prediction of Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design
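A one-dimensional toy of the Chebyshev expansion idea: fit a Chebyshev series to a black-box response sampled over a bounded parameter, then use the cheap surrogate to bound the response over that interval. The damper-stiffness response function below is invented; the paper's method is adaptive, sparse and multivariate.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def response(k):
    """Hypothetical black-box response (e.g. a comfort indicator) vs. a suspension parameter k."""
    return 1.0 / (1.0 + 0.5 * (k - 1.2) ** 2) + 0.05 * np.sin(8 * k)

# Uncertain-but-bounded parameter interval, mapped to [-1, 1] for Chebyshev fitting
k_lo, k_hi = 0.8, 1.6
nodes = np.cos(np.pi * (np.arange(17) + 0.5) / 17)          # Chebyshev sampling nodes in [-1, 1]
k_samples = 0.5 * (k_hi + k_lo) + 0.5 * (k_hi - k_lo) * nodes

coeffs = C.chebfit(nodes, response(k_samples), deg=8)        # surrogate of the sampled responses

# Cheap uncertainty interval: evaluate the surrogate densely over the bounded parameter range
grid = np.linspace(-1, 1, 2001)
vals = C.chebval(grid, coeffs)
print(f"response interval over [{k_lo}, {k_hi}]: [{vals.min():.3f}, {vals.max():.3f}]")
```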
Procedia PDF Downloads 157
5323 Multi-Objective Optimization for Aircraft Fleet Management: A Parametric Approach
Authors: Xin-Yu Li, Dung-Ying Lin
Abstract:
Fleet availability is a crucial indicator for an aircraft fleet. However, in practice, fleet planning involves many resource and safety constraints, such as annual and monthly flight training targets and maximum engine usage limits. Due to safety considerations, engines must be removed for mandatory maintenance and replacement of key components. This situation is known as the "threshold." The annual number of thresholds is a key factor in maintaining fleet availability. However, the traditional method heavily relies on experience and manual planning, which may result in ineffective engine usage and affect the flight missions. This study aims to address the challenges of fleet planning and availability maintenance in aircraft fleets with resource and safety constraints. The goal is to effectively optimize engine usage and maintenance tasks. This study has four objectives: minimizing the number of engine thresholds, minimizing the monthly lack of flight hours, minimizing the monthly excess of flight hours, and minimizing engine disassembly frequency. To solve the resulting formulation, this study uses parametric programming techniques and ϵ-constraint method to reformulate multi-objective problems into single-objective problems, efficiently generating Pareto fronts. This method is advantageous when handling multiple conflicting objectives. It allows for an effective trade-off between these competing objectives. Empirical results and managerial insights will be provided.Keywords: aircraft fleet, engine utilization planning, multi-objective optimization, parametric method, Pareto optimality
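The ε-constraint mechanics can be shown on a two-objective toy: minimize objective one while bounding objective two by a sweep of ε values, collecting one Pareto point per ε. The tiny LP below (two variables standing in for flight hours assigned to two engine groups) is purely illustrative and not the paper's fleet model.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables x1, x2 >= 0 with x1 + x2 = 100 (total monthly flight hours to cover)
# Objective 1: expected engine-threshold "cost"   f1 = 3*x1 + 1*x2   (minimized)
# Objective 2: disassembly/maintenance burden     f2 = 1*x1 + 4*x2   (bounded by epsilon)
f1 = np.array([3.0, 1.0])
f2 = np.array([1.0, 4.0])
A_eq, b_eq = [[1.0, 1.0]], [100.0]

pareto = []
for eps in np.linspace(150, 400, 6):                 # sweep the epsilon bound on objective 2
    res = linprog(f1, A_ub=[f2], b_ub=[eps], A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    if res.success:
        pareto.append((float(f1 @ res.x), float(f2 @ res.x)))

for p1, p2 in pareto:                                # one non-dominated point per epsilon
    print(f"f1 = {p1:6.1f}, f2 = {p2:6.1f}")
```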
Procedia PDF Downloads 35
5322 Sustainable Technology and the Production of Housing
Authors: S. Arias
Abstract:
New housing developments, and the technological changes they imply, adapt the living styles of their residents, as well as new family structures and forms of work, to the particular needs of a specific group of people; this involves different techniques for managing, organizing, equipping and using a particular territory. Owning one's own space is increasingly important, and cities face the challenge of meeting this demand as well as providing the energy, water and waste removal needed for the construction and occupation of new human settlements. To date, these demands and needs have not been fully met, resulting in cities that grow without control, badly used land, and congested avenues and streets. Buildings and dwellings have an important impact on the environment and on people's health; environmental quality therefore links human comfort to the sustainable development of natural resources. Applied to architecture, this concept involves incorporating new technologies throughout the construction process of a dwelling and changing the habits of developers and users, which demands a greater effort in planning energy savings and thus reducing greenhouse gas (GHG) emissions, depending on the geographical location where development is planned. Since the techniques of territorial occupation are not the same everywhere, it must be taken into account that they depend on the geographical, social, political, economic and climatic-environmental circumstances of the place, which change according to the degree of development reached. In the analysis undertaken to check the degree of sustainability of a place, it is necessary to estimate the energy used in artificial air conditioning and lighting. It is likewise necessary to diagnose the availability and distribution of the water resources used for hygiene and for cooling artificially air-conditioned spaces, as well as the waste resulting from these technological processes. Based on the results obtained through the different stages of the analysis, an energy audit can be performed in order to propose sustainability recommendations for architectural spaces aimed at energy saving, rational use of water and optimization of natural resources. This can be carried out through the development of a sustainable building code that provides technical recommendations adapted to the regional characteristics of each study site. Such codes would seek to lay the foundations for building regulations applicable to new human settlements that generate quality, protection and safety in them. These building regulations must be consistent with other national, state and municipal regulations, such as laws on human settlements, urban development and zoning.
Keywords: building regulations, housing, sustainability, technology
Procedia PDF Downloads 349
5321 Formation of Mg-Silicate Scales and Inhibition of Their Scale Formation at Injection Wells in Geothermal Power Plant
Authors: Samuel Abebe Ebebo
Abstract:
Scale precipitation causes major issues for geothermal power plants because it reduces the production rate of geothermal energy. The different chemical and physical conditions at each geothermal power plant can cause scale to precipitate under a particular set of fluid-rock interactions. Depending on the mineral, scale can occur in the production well, steam separators, heat exchangers, reinjection wells, and everywhere in between. The scale consists mainly of smectite and trace amounts of chlorite, magnetite, quartz, hematite, dolomite, aragonite, and amorphous silica. The smectite scale is one of the most problematic scales at injection wells in geothermal power plants; X-ray diffraction and chemical composition identify this smectite as stevensite. The characteristics of the scale differ between injection well lines depending on the fluid chemistry, and the smectite scale is widely distributed in pipelines and surface plants. Mineral-water equilibrium calculations showed that the main factors controlling the saturation indices of smectite are increased pH and dissolved Mg concentration, which drive precipitation on equipment surfaces. This study aims to characterize the scales and geothermal fluids collected from the Onuma geothermal power plant in Akita Prefecture, Japan. Field tests were conducted on October 30–November 3, 2021, at Onuma to determine pH-control methods for preventing magnesium silicate scaling; as an example, the formation of magnesium silicate hydrates (M-S-H) with an MgO to SiO2 ratio of 1.0 at pH 10 was studied at 25 °C for one day. As a result, M-S-H scale formation could be suppressed, and stevensite formation could also be suppressed, when the pH of the fluid was decreased below 8.1, 7.4, and 8 (at 97 °C) in the fluids from O-3Rb and O-6Rb, O-10Rg, and O-12R, respectively. In this context, the scales and fluids collected from injection wells at a geothermal power plant in Japan were analyzed and characterized to understand the formation conditions of Mg-silicate scales through on-site synthesis experiments. From the results of the characterizations and on-site synthesis experiments, the inhibition of their scale formation is discussed based on geochemical modeling.
Keywords: magnesium silicate, scaling, inhibitor, geothermal power plant
Procedia PDF Downloads 71
5320 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads
Authors: Gaurav Kumar Sinha
Abstract:
In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem.The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance.The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures.The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times.Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows.Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy.The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance.Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies
Procedia PDF Downloads 725319 Integration of Educational Data Mining Models to a Web-Based Support System for Predicting High School Student Performance
Authors: Sokkhey Phauk, Takeo Okazaki
Abstract:
The challenging task in educational institutions is to maximize the high performance of students and minimize the failure rate of poor-performing students. An effective method to leverage this task is to know student learning patterns with highly influencing factors and get an early prediction of student learning outcomes at a timely stage for setting up policies for improvement. Educational data mining (EDM) is an emerging disciplinary field of data mining, statistics, and machine learning concerned with extracting useful knowledge and information for the sake of improvement and development in the education environment. The aim of this work is to propose techniques in EDM and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models is conducted, and high-performing models are subsequently developed to obtain better performance. The hybrid random forest (Hybrid RF) produces the most successful classification. For the context of intervention and improving the learning outcomes, a feature selection method, MICHI, which combines the mutual information (MI) and chi-square (CHI) algorithms based on ranked feature scores, is introduced to select a dominant feature set that improves the performance of prediction and uses the obtained dominant set as information for intervention. By using the proposed techniques of EDM, an academic performance prediction system (APPS) is subsequently developed for educational stakeholders to get an early prediction of student learning outcomes for timely intervention. Experimental outcomes and evaluation surveys report the effectiveness and usefulness of the developed system. The system is used to help educational stakeholders and related individuals in intervening and improving student performance. Keywords: academic performance prediction system, educational data mining, dominant factors, feature selection method, prediction model, student performance
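Editorial note: the abstract names MICHI as a combination of mutual information and chi-square rankings but does not give the exact combination rule. The sketch below, assuming a simple averaging of the two rank positions and using scikit-learn's scoring functions, shows one plausible form such a selector could take; `select_michi` and the averaging rule are illustrative assumptions, not the authors' published formula.

```python
# Illustrative sketch of a MICHI-style feature selection step: rank features by
# mutual information and by chi-square, then keep the features that score well
# under the combined ranking. Averaging the two rank positions is an assumption
# made here for illustration.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, chi2

def select_michi(X, y, k=10):
    """Return indices of the top-k features under a combined MI + chi2 ranking."""
    mi_scores = mutual_info_classif(X, y, random_state=0)
    chi_scores, _ = chi2(X, y)                  # chi2 requires non-negative features
    # Convert scores to ranks (0 = best) and average them.
    mi_rank = np.argsort(np.argsort(-mi_scores))
    chi_rank = np.argsort(np.argsort(-chi_scores))
    combined = (mi_rank + chi_rank) / 2.0
    return np.argsort(combined)[:k]

# Hypothetical usage with a student-performance feature matrix:
# dominant = select_michi(X_students, y_grades, k=15)
# X_reduced = X_students[:, dominant]
```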
Procedia PDF Downloads 1115318 The Complementary Effect of Internal Control System and Whistleblowing Policy on Prevention and Detection of Fraud in Nigerian Deposit Money Banks
Authors: Dada Durojaye Joshua
Abstract:
The study examined the combined effect of the internal control system and whistle-blowing policy while pursuing the following specific objectives: to examine the relationship between monitoring activities and fraud detection and prevention, and to investigate the effect of control activities on fraud detection and prevention in Nigerian Deposit Money Banks (DMBs). The population of the study comprises the 89,275 members of staff in the 20 DMBs in Nigeria as at June 2019. Purposive and convenience sampling techniques were used in the selection of 80 members of staff at the supervisory level of the Internal Audit Departments of the head offices of the sampled banks, that is, selecting 4 respondents (Audit Executive/Head, Internal Control; Manager, Operation Risk Management; Head, Financial Crime Control; the Chief Compliance Officer) from each of the 20 DMBs in Nigeria. A standard questionnaire was adapted from the 2017/2018 Internal Control Questionnaire and Assessment of the Bureau of Financial Monitoring and Accountability, Florida Department of Economic Opportunity, and was modified to serve the purpose of the study. It was self-administered to gather data from the 80 respondents at the respective headquarters of the sampled banks across Nigeria. Two Likert scales were used in achieving the stated objectives, and a logit regression was used in analysing the stated hypotheses. It was found that monitoring activities, measured through the constructs conduct of ongoing or separate evaluation (COSE) and evaluation and communication of deficiencies (ECD), are significantly and positively related to fraud detection and prevention in Nigerian DMBs. Likewise, control activities, measured through selection and development of control activities (SDCA), selection and development of general controls over technology to prevent financial fraud (SDGCTF), and development of control activities that give room for transparency through procedures that put policies into action (DCATPPA), contributed to fraud detection and prevention in the Nigerian DMBs. In addition, transparency, accountability, reliability, independence and value relevance were found to have a significant effect on fraud detection and prevention in Nigerian DMBs. The study concluded that the board of directors demonstrated independence from management and exercised oversight of the development and performance of internal control. Part of the conclusion was that there was accountability on the part of the owners and preparers of the financial reports and that the system gives room for members of staff to account for their responsibilities. Among the recommendations was that the management of Nigerian DMBs should create and establish a standard internal control system strong enough to deter fraud in order to encourage continuity of operations by ensuring liquidity, solvency and going concern of the banks. It was also recommended that the banks create a structure that encourages whistleblowing to complement the internal control system. Keywords: internal control, whistleblowing, deposit money banks, fraud prevention, fraud detection
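Editorial note: the abstract reports a logit regression of fraud detection and prevention on the named internal-control constructs. The sketch below shows, under hypothetical survey data, how such a model could be fitted with statsmodels; the column names simply mirror the construct acronyms in the abstract, and the data frame is a stand-in for the 80 questionnaire responses rather than the authors' actual dataset.

```python
# Illustrative sketch of the kind of logit model the abstract describes: a binary
# fraud-detection outcome regressed on Likert-scale construct scores. The column
# names mirror the constructs named in the abstract; the data frame itself is a
# hypothetical stand-in for the questionnaire responses.
import pandas as pd
import statsmodels.api as sm

def fit_fraud_logit(responses: pd.DataFrame):
    """Fit a logit of fraud detection/prevention on the internal-control constructs."""
    predictors = ["COSE", "ECD", "SDCA", "SDGCTF", "DCATPPA"]
    X = sm.add_constant(responses[predictors])
    y = responses["fraud_detected"]          # 1 = fraud detected/prevented, 0 = not
    model = sm.Logit(y, X).fit(disp=False)
    return model

# result = fit_fraud_logit(survey_df)        # survey_df: hypothetical respondent data
# print(result.summary())
```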
Procedia PDF Downloads 835317 Complementary Effect of Whistleblowing Policy and Internal Control System on Prevention and Detection of Fraud in Nigerian Deposit Money Banks
Authors: Dada Durojaye Joshua
Abstract:
The study examined the combined effect of the internal control system and whistle-blowing policy while pursuing the following specific objectives: to examine the relationship between monitoring activities and fraud detection and prevention, and to investigate the effect of control activities on fraud detection and prevention in Nigerian Deposit Money Banks (DMBs). The population of the study comprises the 89,275 members of staff in the 20 DMBs in Nigeria as at June 2019. Purposive and convenience sampling techniques were used in the selection of 80 members of staff at the supervisory level of the Internal Audit Departments of the head offices of the sampled banks, that is, selecting 4 respondents (Audit Executive/Head, Internal Control; Manager, Operation Risk Management; Head, Financial Crime Control; the Chief Compliance Officer) from each of the 20 DMBs in Nigeria. A standard questionnaire was adapted from the 2017/2018 Internal Control Questionnaire and Assessment of the Bureau of Financial Monitoring and Accountability, Florida Department of Economic Opportunity, and was modified to serve the purpose of the study. It was self-administered to gather data from the 80 respondents at the respective headquarters of the sampled banks across Nigeria. Two Likert scales were used in achieving the stated objectives, and a logit regression was used in analysing the stated hypotheses. It was found that monitoring activities, measured through the constructs conduct of ongoing or separate evaluation (COSE) and evaluation and communication of deficiencies (ECD), are significantly and positively related to fraud detection and prevention in Nigerian DMBs. Likewise, control activities, measured through selection and development of control activities (SDCA), selection and development of general controls over technology to prevent financial fraud (SDGCTF), and development of control activities that give room for transparency through procedures that put policies into action (DCATPPA), contributed to fraud detection and prevention in the Nigerian DMBs. In addition, transparency, accountability, reliability, independence and value relevance were found to have a significant effect on fraud detection and prevention in Nigerian DMBs. The study concluded that the board of directors demonstrated independence from management and exercised oversight of the development and performance of internal control. Part of the conclusion was that there was accountability on the part of the owners and preparers of the financial reports and that the system gives room for members of staff to account for their responsibilities. Among the recommendations was that the management of Nigerian DMBs should create and establish a standard internal control system strong enough to deter fraud in order to encourage continuity of operations by ensuring liquidity, solvency and going concern of the banks. It was also recommended that the banks create a structure that encourages whistleblowing to complement the internal control system. Keywords: internal control, whistleblowing, deposit money banks, fraud prevention, fraud detection
Procedia PDF Downloads 785316 Advances in Medication Reconciliation Tools
Authors: Zixuan Liu, Xin Zhang, Kexin He
Abstract:
Against the background of the widespread prevalence of multiple chronic diseases, medication safety has become a pressing concern for patient safety. Medication reconciliation plays a vital role in preventing potential medication risks. However, in medical practice, medication reconciliation faces various challenges, and the wide variety of available tools makes selecting an appropriate one difficult. The article introduces and analyzes the currently available medication reconciliation tools, providing a reference for healthcare professionals in choosing and applying appropriate medication reconciliation tools. Keywords: patient safety, medication reconciliation, tools, review
Procedia PDF Downloads 825315 Flow Field Optimization for Proton Exchange Membrane Fuel Cells
Authors: Xiao-Dong Wang, Wei-Mon Yan
Abstract:
The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a fully three-dimensional, two-phase, non-isothermal fuel cell model, to search for the optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate heat effects using energy equations for the entire cell. The model assumes that the system is steady; the inlet reactants are ideal gases; the flow is laminar; and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for gaseous species, liquid water transport equations in the channels, gas diffusion layers, and catalyst layers, a water transport equation in the membrane, and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case with all channel heights and widths set at 1 mm yields Pcell = 7260 Wm-2. The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 Wm-2, an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase the sub-rib convection, which effectively removes liquid water and enhances oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured without a large loss in cell performance was also tested: using a straight final channel of 0.1 mm height led to a 7.37% power loss, while a design with all channel widths set to 1 mm and the optimal channel heights obtained above yields only a 1.68% loss in current density. The presence of a final, diverging channel thus has a greater impact on cell performance than fine adjustment of channel width under the simulation conditions studied herein. Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection
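Editorial note: the abstract couples a simplified conjugate-gradient method with a full 3D two-phase fuel-cell simulation, which cannot be reproduced here. The sketch below shows a conjugate-gradient ascent loop in isolation, with a hypothetical quadratic surrogate standing in for the simulation's power-density evaluation; the fixed step size, the surrogate, and the Polak-Ribiere update are illustrative assumptions rather than the authors' exact procedure.

```python
# Sketch of a simplified conjugate-gradient loop around a black-box objective, in
# the spirit of the combined procedure the abstract describes. cell_power(x)
# stands in for the 3D two-phase fuel-cell simulation; the quadratic surrogate
# below exists only so the sketch runs end to end.
import numpy as np

def cell_power(x):
    # Placeholder objective: peak power at heights/widths of 0.8 mm (hypothetical).
    return 8000.0 - 500.0 * np.sum((x - 0.8) ** 2)

def numerical_grad(f, x, h=1e-4):
    """Central-difference gradient of a black-box objective."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def maximize_cg(f, x0, step=1e-3, iters=200):
    """Polak-Ribiere conjugate-gradient ascent with a fixed step size."""
    x = x0.copy()
    g = numerical_grad(f, x)
    d = g.copy()
    for _ in range(iters):
        x = x + step * d
        g_new = numerical_grad(f, x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))
        d = g_new + beta * d
        g = g_new
    return x

# Nine design variables: channel heights H1-H5 and widths W2-W5, all 1 mm initially.
x_opt = maximize_cg(cell_power, np.ones(9))
print("optimised design (mm):", np.round(x_opt, 3), "Pcell:", round(cell_power(x_opt), 1))
```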
Procedia PDF Downloads 302