Search results for: Matlab efficiency simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11000

2060 Efficient Implementation of Finite Volume Multi-Resolution WENO Scheme on Adaptive Cartesian Grids

Authors: Yuchen Yang, Zhenming Wang, Jun Zhu, Ning Zhao

Abstract:

An easy-to-implement and robust finite volume multi-resolution Weighted Essentially Non-Oscillatory (WENO) scheme is proposed on adaptive Cartesian grids in this paper. The multi-resolution WENO scheme is combined with the ghost-cell immersed boundary method (IBM) and a wall-function technique to solve the Navier-Stokes equations. Unlike k-exact finite volume WENO schemes, which involve large amounts of extra storage, repeated solution of the matrices generated by a least-squares method, or the calculation of optimal linear weights on adaptive Cartesian grids, the present methodology adds very little overhead and can be easily implemented in existing edge-based computational fluid dynamics (CFD) codes with minor modifications. Moreover, the linear weights of this adaptive finite volume multi-resolution WENO scheme can be any positive numbers provided that their sum is one. This bypasses the calculation of the optimal linear weights and avoids dealing with negative linear weights on adaptive Cartesian grids. Some benchmark viscous problems are numerically solved to show the efficiency and good performance of this adaptive multi-resolution WENO scheme. Compared with a second-order edge-based method, the presented method can be implemented on an adaptive Cartesian grid with only slight modification for high Reynolds number problems.

Keywords: adaptive mesh refinement method, finite volume multi-resolution WENO scheme, immersed boundary method, wall-function technique

Procedia PDF Downloads 136
2059 Self-Organizing Maps for Credit Card Fraud Detection and Visualization

Authors: Peng, Chun-Yi, Chen, Wei-Hsuan, Ueng, Shyh-Kuang

Abstract:

This study focuses on the application of self-organizing map (SOM) technology to the analysis of credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. SOM, as an artificial neural network, is particularly suited to pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activity. Moreover, this study has developed a specialized visualization tool to present intuitively the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activity.
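
The SOM training loop described above (best-matching unit search, neighborhood update, decaying learning rate) can be sketched in a few lines of plain Python. The transaction amounts below are synthetic, not the study's card data, and the one-dimensional map is a deliberate simplification of a full two-dimensional SOM:

```python
import random

random.seed(1)

# Synthetic "transaction amounts": two normal spending patterns.
data = ([random.gauss(50, 5) for _ in range(200)]
        + [random.gauss(500, 40) for _ in range(200)])

n_units = 10
w = [random.uniform(0, 600) for _ in range(n_units)]  # unit weights

for t in range(3000):
    x = random.choice(data)
    # Best-matching unit: the unit whose weight is closest to the sample.
    bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
    lr = 0.5 * (1 - t / 3000)                  # decaying learning rate
    radius = max(1, int(3 * (1 - t / 3000)))   # shrinking neighborhood
    for i in range(n_units):
        if abs(i - bmu) <= radius:
            w[i] += lr * (x - w[i])            # pull neighborhood toward x

# After training the units cluster around the two spending patterns;
# an amount far from every unit is a candidate anomaly.
outlier = 5000.0
dist = min(abs(wi - outlier) for wi in w)
print(sorted(round(wi) for wi in w), dist)
```

In a fraud-detection setting, the distance from a new transaction to its best-matching unit plays the role of an anomaly score.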

Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies

Procedia PDF Downloads 39
2058 Investigation of the Functional Impact of Amblyopia on Visual Skills in Children

Authors: Chinmay V. Deshpande

Abstract:

Purpose: To assess the efficiency of visual functions and visual skills in strabismic and anisometropic amblyopes, and to assess visual acuity and contrast sensitivity in anisometropic amblyopes with spectacles and contact lenses. Method: In a prospective clinical study, 32 children aged 5 to 15 years presenting with amblyopia in the pediatric department of Shri Ganapati Netralaya, Jalna, India, were assessed over a period of three and a half months. Visual acuity was measured with Snellen's and Bailey-Lovie logMAR charts, whereas contrast sensitivity was measured with the Pelli-Robson chart, with spectacles and contact lenses. Saccadic movements were assessed with the SCCO scoring criteria, and accommodative facility was checked with ±1.50 DS flippers. Stereopsis was assessed with the TNO test. Results: Using the Wilcoxon signed-rank test (p < 0.05, < 0.001), the mean linear visual acuity was 0.29 (≈ 6/21) and the mean single-optotype visual acuity was found to be 0.36 (≈ 6/18). Mean visual acuity of 0.27 (≈ 6/21) with spectacles improved to 0.33 (≈ 6/18) with contact lenses in amblyopic eyes. The mean logMAR visual acuity with spectacles and contact lenses was found to be 0.602 (≈ 6/24) and 0.531 (≈ 6/21), respectively. Among the 20 amblyopic eyes for which contrast thresholds were recorded, the mean contrast threshold improved in 9 patients from 0.27 with spectacles to 0.19 with contact lenses. The mean accommodative facility was 5.31 (± 2.37). Twenty-four subjects (75%) revealed marked saccadic defects on the test applied, and 78% of subjects did not show even gross stereoscopic ability on the TNO test. Conclusion: This study supports findings from previous studies on amblyopia and its associated deficits in visual skills. In addition, anisometropic amblyopia can be managed better with contact lenses.

Keywords: strabismus, anisometropia, amblyopia, contrast sensitivity, saccades, stereopsis

Procedia PDF Downloads 405
2057 Study on the Heavy Oil Degradation Performance and Kinetics of Immobilized Bacteria on Modified Zeolite

Authors: Xiao L Dai, Wen X Wei, Shuo Wang, Jia B Li, Yan Wei

Abstract:

Heavy oil pollution, generated from both natural and anthropogenic sources, can cause significant damage to the ecological environment due to the toxicity of some of its constituents. Microbial remediation is becoming a promising technology for treating oil pollution owing to its low cost and avoidance of secondary pollution; microorganisms are the key players in the process. Compared to free microorganisms, immobilized microorganisms possess several advantages, including high metabolic activity rates, strong resistance to toxic chemicals and to competition from indigenous microorganisms, and effective resistance to being washed away (in open water systems). Many immobilized microorganisms have been successfully used for bioremediation of heavy oil pollution. Considering its broad availability, low cost, simple preparation, large specific surface area and small impact on microbial activity, modified zeolite was selected as the bio-carrier for bacteria immobilization. Three strains of heavy oil-degrading bacteria, Bacillus sp. DL-13, Brevibacillus sp. DL-1 and Acinetobacter sp. DL-34, were immobilized on the modified zeolite under mild conditions, with bacterial loads (bacteria/modified zeolite) of 1.12 mg/g, 1.11 mg/g, and 1.13 mg/g, respectively. SEM results showed that the bacteria were mainly adsorbed on the surface or lodged in the voids of the modified zeolite. The heavy oil degradation efficiency of the immobilized bacteria was 62.96%, higher than that of the free bacteria (59.83%). The degradation process of the immobilized bacteria followed first-order kinetics with a rate constant of 0.1483 d⁻¹, significantly higher than that of the free bacteria (0.1123 d⁻¹), suggesting that the immobilized bacteria start heavy oil degradation rapidly and maintain high degradation activity. The results suggest that immobilized bacteria are a promising technology for bioremediation of oil pollution.
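
The first-order rate constants reported above fix the degradation timeline completely; a minimal sketch follows (the 7-day evaluation point is an assumed example, not a measurement from the study):

```python
import math

def remaining_fraction(k, t):
    """First-order decay: C(t)/C0 = exp(-k * t), with k in d^-1 and t in days."""
    return math.exp(-k * t)

def half_life(k):
    """Time for the oil concentration to halve: t_1/2 = ln(2) / k."""
    return math.log(2) / k

k_immobilized = 0.1483  # d^-1, rate constant for immobilized bacteria (from the abstract)
k_free = 0.1123         # d^-1, rate constant for free bacteria (from the abstract)

# Half-lives implied by the two rate constants (days).
print(half_life(k_immobilized), half_life(k_free))

# Fraction of heavy oil remaining after an assumed 7-day treatment.
print(remaining_fraction(k_immobilized, 7), remaining_fraction(k_free, 7))
```

The higher rate constant translates into a half-life roughly a day and a half shorter for the immobilized bacteria, consistent with the faster start-up claimed in the abstract.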

Keywords: heavy oil pollution, microbial remediation, modified zeolite, immobilized bacteria

Procedia PDF Downloads 131
2056 Preparation of Catalyst-Doped TiO2 Nanotubes by Single Step Anodization and Potential Shock

Authors: Hyeonseok Yoo, Kiseok Oh, Jinsub Choi

Abstract:

Titanium oxide nanotubes have attracted great attention because of their photocatalytic activity and large surface area. To enhance their electrochemical properties, catalysts should be doped into the structure, because titanium oxide nanotubes themselves have low electroconductivity and catalytic activity. It has been reported that Ru- and Ir-doped titanium oxide electrodes exhibit high efficiency and low overpotential in the oxygen evolution reaction (OER) for water splitting. In general, titanium oxide nanotubes with high aspect ratio cannot be easily doped by conventional complex methods. Herein, two facile routes for Ru doping into high-aspect-ratio titanium oxide nanotubes, namely single step anodization and potential shock, are introduced in detail. When single step anodization was carried out, the stability of the electrodes increased, but the onset potential shifted in the anodic direction. On the other hand, when a high potential shock voltage was applied, a large amount of ruthenium/ruthenium oxides was doped into the titanium oxide nanotubes and thick barrier oxide layers were formed simultaneously. With both routes, ruthenium/ruthenium oxides were homogeneously doped into the titanium oxide nanotubes, and doping in aqueous solution generally incorporated a higher amount of Ru than doping in non-aqueous solution. The amounts of doped catalyst were analyzed by X-ray photoelectron spectroscopy (XPS). The optimum condition for water splitting was investigated in terms of the amount of doped Ru and the thickness of the barrier oxide layer.

Keywords: doping, potential shock, single step anodization, titanium oxide nanotubes

Procedia PDF Downloads 437
2055 Modeling Waiting and Service Time for Patients: A Case Study of Matawale Health Centre, Zomba, Malawi

Authors: Moses Aron, Elias Mwakilama, Jimmy Namangale

Abstract:

Spending long periods in queues for a basic service remains a common challenge in most developing countries, including Malawi. In the health sector in particular, the Out-Patient Department (OPD) experiences long queues, which puts the lives of patients at risk. By using queuing analysis to understand the nature of the problem and the efficiency of the service system, such problems can be abated. Depending on the kind of service, the literature proposes various possible queuing models. However, rather than adopting generalized assumed models, real case-study data can give a deeper understanding of the particular model and of how it can vary from one day to the next and from one case to another. This study therefore uses data obtained from one urban health centre (HC) for BP, pediatric and general OPD cases to investigate the average queuing time of patients within the system. It identifies the proper queuing model by investigating the distribution functions of patients' arrival times, inter-arrival times, waiting times and service times. Compared with the standard values set by WHO, the study found that patients at this HC spend more time waiting than being served. On model investigation, different days presented different models, ranging from an assumed M/M/1 and M/M/2 to M/Er/2. Through sensitivity analysis, the commonly assumed M/M/1 model generally failed to fit the data, whereas an M/Er/2 model fitted well. An M/Er/3 model performed well in terms of resource utilization, suggesting a need to increase medical personnel at this HC, whereas an M/Er/4 model resulted in more idle human resources.
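
For the M/M/c end of the model family discussed above, the mean queueing delay follows in closed form from the Erlang C formula; a small sketch with illustrative arrival and service rates (assumed numbers, not the Matawale HC measurements, and Erlang rather than exponential service as in M/Er/c would need a different formula):

```python
import math

def erlang_c(c, a):
    """Probability that an arriving patient must wait in an M/M/c queue.
    a = lambda/mu is the offered load in Erlangs; requires a < c for stability."""
    head = sum(a**k / math.factorial(k) for k in range(c))
    tail = a**c / (math.factorial(c) * (1 - a / c))
    return tail / (head + tail)

def mmc_mean_wait(lam, mu, c):
    """Mean waiting time in queue, W_q, for an M/M/c system."""
    a = lam / mu
    return erlang_c(c, a) / (c * mu - lam)

# Illustrative rates: 10 patients/hour arriving, 6 served/hour per clinician.
lam, mu = 10.0, 6.0
print(mmc_mean_wait(lam, mu, 2))  # mean queueing delay (hours) with 2 servers
print(mmc_mean_wait(lam, mu, 3))  # adding a third server cuts the delay sharply
```

With these assumed rates, a second-versus-third-server comparison mirrors the abstract's finding that moving from 2 to 3 servers changes utilization and waiting dramatically.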

Keywords: health care, out-patient department, queuing model, sensitivity analysis

Procedia PDF Downloads 418
2054 Effect of Synchronization Protocols on Serum Concentrations of Estrogen and Progesterone in Holstein Dairy Heifers

Authors: K. Shafiei, A. Pirestani, G. Ghalamkari, S. Safavipour

Abstract:

Use of GnRH or its agonists to increase conception rates should be based on an understanding of GnRH-induced biological effects on the reproductive-endocrine system. This effect may occur through a GnRH-stimulated LH surge stimulating production of progesterone by the corpus luteum. The aim of this study was to compare the effects on reproductive efficiency of a luteolytic dose of the synthetic prostaglandin Cloprostenol Sodium versus injectable progesterone and Luliberin-A, measured through follicular estrogen and progesterone levels. In this study, we used 45 head of Holstein dairy heifers in three treatments, with 15 replicates per treatment, assigned to random groups. Before the project began, all the heifers were synchronized with two injections of 3 mL Cloprostenol Sodium at an interval of 11 days; 10 days later, a second injection of prostaglandin was given, after which we started the following protocol. Control group: daily injection of 1 cc sodium chloride serum. Group B: day zero, intramuscular injection of 15 mg Luliberin-A + every-other-day injection of 3 cc progesterone + day 7, injection of Cloprostenol Sodium + day 9, injection of 15 mg Luliberin-A. Group C: as Group B + daily injection of progesterone. Blood samples were then collected and centrifuged, and plasma was analysed by ELISA. The analysis used the SPSS software package, and means were compared with LS Means and the LSD test at the 5% significance level. The results of this study show that the highest plasma progesterone levels were in the control group (P ≥ 0.05); therefore, daily injection of progesterone inhibits the growth of the CL. The highest plasma estrogen levels were in Group C (P ≥ 0.05); thus it can be concluded that a rise in endogenous estrogen concentrations normally stimulates the preovulatory LH release in heifers.

Keywords: Luliberin-A, Cloprostenol Sodium, estrogen, progesterone, dairy heifers

Procedia PDF Downloads 523
2053 Engineering of Filtration Systems in Egyptian Cement Plants: Industrial Case Study

Authors: Mohamed. A. Saad

Abstract:

The paper presents a case study of the conversion of electrostatic precipitators (ESPs) into fabric filters (FF). Seven cement production companies were established in Egypt between 1927 and 1980, and six new companies were established to cope with the increasing cement demand of the 1980s. The cement production market shares in Egypt indicate that there are six multinational companies in the local market; they are interested in improving environmental conditions and therefore decided to undertake an emission reduction project. The experimental work in the present study is divided into two main parts: (I) measuring the efficiency of filter fabrics, with a detailed description of a purpose-designed apparatus; here the paper also reveals the factors that should be optimized in order to assist problem diagnosis and solving and to increase the life of filter bags; and (II) methods to mitigate dust emissions in Egyptian cement plants, with a special focus on converting the ESPs into fabric filters using the same ESP casing, bottom hoppers, dust transportation system, and ESP ductwork. Only the fan system was replaced, to handle the higher pressure drop of the fabric filter. The proper selection of bag material was a prime factor with regard to gas composition, temperature and particle size. Fiberglass bags with a PTFE membrane coating were selected; this fabric is rated for a continuous temperature of 250 °C and a surge temperature of 280 °C. The dust emission recorded was less than 20 mg/m³ from the production line fitted with fabric filters, which is superior to the stacks of the lines still operating with ESPs.

Keywords: engineering, electrostatic precipitator, filtration, dust collectors, cement

Procedia PDF Downloads 234
2052 Integer Programming: Domain Transformation in Nurse Scheduling Problem

Authors: Geetha Baskaran, Andrzej Barjiela, Rong Qu

Abstract:

Motivation: Nurse scheduling is a complex combinatorial optimization problem that is known to be NP-hard. It requires efficient re-scheduling that minimizes a trade-off among measures of violation, obtained by relaxing selected constraints into soft constraints whose violations are measured. Problem Statement: In this paper, we extend our novel approach to solving the nurse scheduling problem by transforming it through information granulation. Approach: The approach satisfies the rules of a typical hospital environment based on a standard benchmark problem. Generating good work schedules has a great influence on nurses' working conditions, which are strongly related to the quality of health care. Domain transformation, which combines the strengths of operations research and artificial intelligence, was proposed for the solution of the problem. Compared to conventional methods, our approach involves judicious grouping (information granulation) of shift types, which transforms the original problem into a smaller solution domain. The schedules from the smaller problem domain are then converted back into the original problem domain by taking into account the constraints that could not be represented in the smaller domain. An Integer Programming (IP) package is used to solve the transformed scheduling problem using the branch and bound algorithm. We have used GNU Octave for Windows to solve this problem. Results: The scheduling problem has been solved in the proposed formalism, resulting in a high quality schedule. Conclusion: Domain transformation represents a departure from the conventional one-shift-at-a-time scheduling approach. It offers the advantage of efficient and easily understandable solutions as well as deterministic reproducibility of the results. We note, however, that it does not guarantee the global optimum.
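
The granulation idea above, assigning whole shift patterns rather than one shift at a time, with hard coverage constraints and measured soft-constraint violations, can be illustrated on a toy instance. The patterns, demands, and workload limit below are hypothetical; the actual study solves the granulated problem with an IP package and branch and bound, not exhaustive search:

```python
from itertools import product

# Toy granulation: each "granule" is a whole 4-day work pattern (1 = on duty).
patterns = [
    (1, 1, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 1),
    (1, 0, 0, 1),
]
required = (1, 1, 1, 1)   # hard constraint: minimum nurses on duty each day
max_days = 2              # soft constraint: preferred workload per nurse

def violations(assignment):
    """Return soft-constraint violation count, or None if coverage (hard) fails."""
    cover = [sum(patterns[p][d] for p in assignment) for d in range(4)]
    if any(cover[d] < required[d] for d in range(4)):
        return None  # infeasible: coverage is a hard constraint
    # Soft violation: days worked beyond the preferred workload.
    return sum(max(0, sum(patterns[p]) - max_days) for p in assignment)

# Assign one pattern to each of 3 nurses, minimizing soft violations.
best = min(
    (a for a in product(range(len(patterns)), repeat=3) if violations(a) is not None),
    key=violations,
)
print(best, violations(best))
```

Searching over a handful of patterns instead of day-by-day shift choices is exactly the domain-shrinking effect granulation is meant to buy; an IP solver then replaces the brute-force `min` at realistic sizes.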

Keywords: domain transformation, nurse scheduling, information granulation, artificial intelligence, simulation

Procedia PDF Downloads 377
2051 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights

Authors: Julian Wise

Abstract:

Newcrest Mining is one of the world's top five gold and rare earth mining organizations by production, reserves and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500-listed organization Insight Enterprises, to standardize machine learning solutions that process data from over one hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through cloud software architecture and edge computing, these developments enable standardized machine learning applications to inform the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired for predictive modelling is processed through edge computing and collectively stored within a data lake. This digital transformation has necessitated a standardized software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of improved worker safety and production efficiency through big data applications.

Keywords: mineral technology, big data, machine learning operations, data lake

Procedia PDF Downloads 94
2050 Fluidised Bed Gasification of Multiple Agricultural Biomass-Derived Briquettes

Authors: Rukayya Ibrahim Muazu, Aiduan Li Borrion, Julia A. Stegemann

Abstract:

Biomass briquette gasification is regarded as a promising route to the efficient use of briquettes in the generation of energy, fuels and other useful chemicals; however, previous research has focused on briquette gasification in fixed bed gasifiers such as updraft and downdraft designs. The fluidised bed gasifier has the potential to be effectively sized for medium or large scale. This study investigated the use of fuel briquettes produced from blends of rice husk and corn cob biomass residues in a bubbling fluidised bed gasifier. The study adopted a combination of numerical equations and the Aspen Plus simulation software to predict the product gas (syngas) composition based on briquette density and biomass composition (blend ratio of rice husks to corn cobs). The Aspen Plus model was based on an experimentally validated model from the literature. The results, based on a briquette size of 32 mm diameter and a relaxed density range of 500 to 650 kg/m³, indicated that the fluidisation air required in the gasifier increased with briquette density, and the fluidisation air proved to be the controlling factor compared with the actual air required for gasification of the biomass briquettes. The mass flow rate of CO2 in the predicted syngas composition increased with the air flow rate, while CO production decreased and H2 remained almost constant. The H2/CO ratio for the various blends of rice husks and corn cobs did not change significantly at the designed process air flow, but a significant difference of 1.0 in the H2/CO ratio was observed at higher air flow rates, across blend ratios from 10/90 to 90/10 rice husks to corn cobs. This implies the need to further understand the effect of biomass variability and hydrodynamic parameters on syngas composition in biomass briquette gasification.

Keywords: aspen plus, briquettes, fluidised bed, gasification, syngas

Procedia PDF Downloads 436
2049 Reinforcement-Learning Based Handover Optimization for Cellular Unmanned Aerial Vehicles Connectivity

Authors: Mahmoud Almasri, Xavier Marjou, Fanny Parzysz

Abstract:

The demand for services provided by Unmanned Aerial Vehicles (UAVs) is increasing pervasively across several sectors, including public safety, economic, and delivery services. As the number of applications using UAVs grows rapidly, more powerful, power-efficient computing units with quality-of-service guarantees are necessary. Recently, cellular technology has drawn attention as a means of connectivity that can ensure reliable and flexible communication services for UAVs. In cellular networks, flying at high speed and altitude is subject to several key challenges, such as frequent handovers (HOs), high interference levels, coverage holes, etc. Unnecessary HOs may lead to 'ping-pong' between the UAVs and the serving cells, degrading quality of service and increasing energy consumption. In order to optimize the number of HOs, we develop in this paper a Q-learning-based algorithm. While existing works focus on adjusting the number of HOs in a static network topology, we take into account the impact of cell deployment for three different simulation scenarios (rural, semi-rural and urban areas). We also consider the impact of the decision distance, at which the drone makes its switching decision. Our results show that a Q-learning-based algorithm significantly reduces the average number of HOs compared to a baseline case in which the drone always selects the cell with the highest received signal. Moreover, we identify which hyper-parameters have the largest impact on the number of HOs in the three tested environments, i.e. rural, semi-rural, and urban.
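
The Q-learning formulation sketched above, with states built from position and serving cell and a reward that trades received signal against a handover penalty, might look as follows on a toy one-dimensional flight path. The signal model, cell layout, and hyper-parameters are all assumptions for illustration, not the paper's simulator:

```python
import random

random.seed(0)

# Toy setting: a drone flies along 10 positions and at each step chooses
# which of 3 cells to attach to.  Reward favors strong signal but charges
# a penalty per handover, discouraging "ping-pong" between cells.
N_POS, N_CELLS = 10, 3
ALPHA, GAMMA, EPS, HO_PENALTY = 0.1, 0.9, 0.1, 2.0

def signal(pos, cell):
    """Hypothetical received signal: each cell is strongest near its center."""
    centers = [1, 4, 8]
    return 5.0 - abs(pos - centers[cell])

# Q[(position, serving_cell)][candidate_cell]
Q = {(p, c): [0.0] * N_CELLS for p in range(N_POS) for c in range(N_CELLS)}

for _ in range(2000):                       # training episodes
    serving = 0
    for pos in range(N_POS - 1):
        state = (pos, serving)
        if random.random() < EPS:           # epsilon-greedy exploration
            action = random.randrange(N_CELLS)
        else:
            action = max(range(N_CELLS), key=lambda a: Q[state][a])
        reward = signal(pos, action) - (HO_PENALTY if action != serving else 0.0)
        next_state = (pos + 1, action)
        # Standard Q-learning update.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action]
        )
        serving = action

# Evaluate the learned greedy policy: count handovers along the path.
serving, handovers = 0, 0
for pos in range(N_POS - 1):
    action = max(range(N_CELLS), key=lambda a: Q[(pos, serving)][a])
    handovers += action != serving
    serving = action
print("handovers:", handovers)
```

A signal-only baseline would switch whenever any cell is momentarily stronger; the learned policy keeps the handover count near the minimum needed to cross the three cells' zones.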

Keywords: drones connectivity, reinforcement learning, handovers optimization, decision distance

Procedia PDF Downloads 87
2048 Numerical Evaluation of Deep Ground Settlement Induced by Groundwater Changes During Pumping and Recovery Test in Shanghai

Authors: Shuo Wang

Abstract:

The hydrogeological parameters of an engineering site and the hydraulic connections between aquifers can be obtained from a pumping test, while a recovery test reveals the characteristics of water level recovery and the pattern of surface subsidence recovery. Together, the two tests provide the basis for subsequent engineering design. At present, the deformation of deep soil caused by pumping tests is often neglected. However, some studies have shown that the maximum settlement caused by groundwater drawdown is not necessarily at the surface but may occur in the deep soil. In addition, the pattern of settlement recovery in each soil layer as the water level recovers is not clear. If a deformation-sensitive structure lies deep in the test site, safety accidents may occur. In this study, a pumping test and a recovery test of a confined aquifer in Shanghai are introduced, and the measured groundwater changes and surface subsidence are analyzed. In addition, a fluid-solid coupling model was established in ABAQUS based on Biot consolidation theory, and the models were verified by comparing computed and measured results. Using this model, the variation of the water level and the deformation of deep soil during pumping and recovery tests are discussed for different site conditions and at different times and locations. It is found that the maximum soil settlement caused by pumping in a confined aquifer is related to the permeability of the overlying aquitard and the pumping time. There is a lag between soil deformation and groundwater changes, and the settlement of each soil layer recovers at a different rate as the water level rises. Finally, some possible research directions are proposed to provide new ideas for academic research in this field.
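
As a greatly simplified illustration of the pressure-diffusion mechanics behind the coupled Biot model above, one-dimensional Terzaghi consolidation (a special case of Biot theory) can be integrated with an explicit finite-difference scheme. All parameter values below are assumed, not the Shanghai site data:

```python
# Solve du/dt = c_v * d2u/dz2 for normalized excess pore pressure u(z, t)
# after an instantaneous drawdown, drained at the top, impermeable at bottom.
cv = 1e-7          # coefficient of consolidation, m^2/s (assumed)
H, nz = 10.0, 51   # drainage path length (m) and grid points (assumed)
dz = H / (nz - 1)
dt = 0.4 * dz * dz / cv   # explicit stability requires dt <= 0.5 * dz^2 / cv

u = [1.0] * nz     # initial excess pore pressure (normalized to 1)
u[0] = 0.0         # drained top boundary: pressure dissipates immediately

for _ in range(2000):
    new = u[:]
    for i in range(1, nz - 1):
        new[i] = u[i] + cv * dt / dz**2 * (u[i-1] - 2*u[i] + u[i+1])
    new[-1] = new[-2]   # impermeable bottom: zero pressure gradient
    u = new

# Average degree of consolidation (0 = none, 1 = fully dissipated):
# in Terzaghi theory this maps directly to the fraction of final settlement.
U = 1.0 - sum(u) / nz
print(U)
```

The profile dissipates from the drained boundary downward, which is one way to see the lag noted in the abstract: deeper layers feel the drawdown, and later the recovery, only after the pressure change has diffused to them.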

Keywords: coupled hydro-mechanical analysis, deep ground settlement, numerical simulation, pumping test, recovery test

Procedia PDF Downloads 27
2047 Incidence and Causes of Elective Surgery Cancellations in Songklanagarind Hospital, Thailand

Authors: A. Kaeotawee, N. Bunmas, W. Chomthong

Abstract:

Background: The cancellation of elective surgery is a major indicator of poor operating room efficiency. Furthermore, it is recognized as a major cause of emotional trauma to patients as well as their families. This study was carried out to assess the incidence and causes of elective surgery cancellation in our setting and to find appropriate solutions for better quality management. Objective: To determine the incidence and causes of elective surgery cancellations in Songklanagarind Hospital. Material and Method: A prospective survey was conducted from September to November 2012. All patients whose scheduled elective operations were cancelled were assessed. Data was collected on the following two components: (1) patient demographics; and (2) main reasons for cancellation, grouped into patient-related factors and organizational factors. Data are reported as a percentage of patients whose operations were cancelled. The association between cancellation status and patient demographics was assessed using univariate logistic regression. Results: 2,395 patients were scheduled for elective surgery, and of these 343 (14.3%) had their operations cancelled. Cardiothoracic surgery had the highest rate of cancellations (28.7%), while the fewest cancellations occurred in ophthalmology (10.1%). The main reasons for cancellation were organizational (53.6%), mostly attributable to the surgeon (48.4%); patient-related causes accounted for 46.4%, mostly non-medical reasons (32.1%). The most common cause of cancellation by surgeons was lack of theater time (21.3%), and by patients, non-appearance (25.1%). Cancellation was significantly associated with type of patient, health insurance, type of anesthesia and specialty (p<0.05). Conclusion: Surgery cancellations by surgeons owing to a lack of theater time were a significant problem in our setting. Appropriate solutions for better quality improvement are needed.

Keywords: elective cases, surgery cancellation, quality management, appropriate solutions

Procedia PDF Downloads 243
2046 Use of Acid Mine Drainage as a Source of Iron to Initiate the Solar Photo-Fenton Treatment of Municipal Wastewater: Circular Economy Effect

Authors: Tooba Aslam, Efthalia Chatzisymeon

Abstract:

Untreated municipal wastewater (MWW) is recognized as one of the most harmful sources of pollution of environmental waters due to its high content of nutrients and organic contaminants. The removal of Chemical Oxygen Demand (COD) from synthetic as well as municipal wastewater is investigated here, using acid mine drainage (AMD) as a source of iron to initiate the solar photo-Fenton treatment of municipal wastewater. In this study, AMD and different iron-rich minerals, such as goethite, hematite, magnetite, and magnesite, have been used as the iron source to initiate the photo-Fenton process, and the co-treatment of real municipal wastewater with AMD or minerals is examined in detail. The effects of different parameters, such as mineral recovery from AMD, AMD as a source of iron, H₂O₂ concentration, and COD concentration, on the percentage COD removal are studied. The results show that, of the four minerals, only hematite (1 g/L) could remove 30% of the pollutants, in about 100 minutes with 1000 ppm of H₂O₂. The addition of AMD as a source of iron was then tested and compared for both synthetic and real wastewater from South Africa under the same conditions, i.e., 1000 ppm of H₂O₂, ambient temperature, pH 2.8, and a solar simulator. For the synthetic wastewater, the maximum removal (56%) was achieved with 50 ppm of iron (from AMD) at 160 minutes. For the real wastewater, the removal efficiency was 99% with 30 ppm of iron at 90 minutes, and 96% with 50 ppm of iron at 120 minutes. In conclusion, the co-treatment of AMD and MWW by solar photo-Fenton treatment appears to be an effective and promising method to remove organic materials from municipal wastewater.

Keywords: municipal wastewater treatment, acid mine drainage, co-treatment, COD removal, solar photo-Fenton, circular economy

Procedia PDF Downloads 72
2045 Cannabidiol (CBD) Resistant Salmonella Strains Are Susceptible to Epsilon 34 Phage Tailspike Protein

Authors: Ibrahim Iddrisu, Joseph Ayariga, Junhuan Xu, Ayomide Adebanjo, Boakai K. Robertson, Michelle Samuel-Foo, Olufemi Ajayi

Abstract:

The rise of antimicrobial resistance is a global public health crisis that threatens the effective control and prevention of infections. Due to the emergence of pan-drug-resistant bacteria, most antibiotics have lost their efficacy. Bacteriophages or their components are known to target bacterial cell walls, cell membranes, and lipopolysaccharides (LPS) and hydrolyze them. Bacteriophages, being the natural predators of pathogenic bacteria, are inevitably categorized as 'human friends', fulfilling the adage that 'the enemy of my enemy is my friend'. Leveraging their lethal capabilities against pathogenic bacteria, researchers are searching for more ways to overcome the current antibiotic resistance challenge. In this study, we expressed and purified the epsilon 34 phage tailspike protein (E34 TSP) from the E34 TSP gene, then assessed the ability of this bacteriophage protein to kill two cannabidiol (CBD)-resistant strains of Salmonella spp. We also assessed the ability of the tailspike protein to cause bacterial membrane disruption and dehydrogenase depletion. We observed that the combined treatment of the CBD-resistant strains of Salmonella with CBD and E34 TSP showed poor killing ability, whereas mono treatment with E34 TSP showed considerably higher killing efficiency. The study demonstrates that the inhibition of the bacteria by E34 TSP was due in part to membrane disruption and dehydrogenase inactivation by the protein. These results provide an interesting background highlighting the crucial role phage proteins such as E34 TSP could play in the control of pathogenic bacteria.

Keywords: cannabidiol, resistance, Salmonella, antimicrobials, phages

Procedia PDF Downloads 43
2044 Impact of Out-Of-Pocket Payments on Health Care Finance and Access to Health Care Services: The Case of Health Transformation Program in Turkey

Authors: Bengi Demirci

Abstract:

Out-of-pocket payments have become one of the common models adopted by health care reforms all over the world, and they have serious implications not only for the financial set-up of the health care systems in question but also for the people involved in terms of their access to the health care services provided. On the one hand, out-of-pocket payments are used to raise resources for the finance of the health care system and to decrease non-essential health care expenses by having a deterrent effect on patients. On the other hand, the out-of-pocket payment model causes a regressive distribution effect by putting a greater burden on lower income groups and making them refrain from using health care services. Having adopted the out-of-pocket payment model relatively recently, within the context of its Health Transformation Program ongoing since the early 2000s, Turkey provides a good case for re-evaluating the pros and cons of this model so that equality in access to health care is not sacrificed for raising revenue for health care finance, and vice versa. Therefore, this study aims at analyzing the impact of out-of-pocket payments on the health finance system itself and on patients’ access to health care services in Turkey, where the out-of-pocket payment model has been in use for a while. In so doing, data showing the revenue obtained from out-of-pocket payments and their share in health care finance are analyzed. In addition, data showing the change in the amount of expenditure made by patients on health care services after the adoption of out-of-pocket payments, and the change in the use of various health care services in the meanwhile, are examined. It is important for countries like Turkey that have recently adopted this model to strike the right balance between the objective of cost efficiency and that of equality in accessing health care services.

Keywords: health care access, health care finance, health reform, out-of-pocket payments

Procedia PDF Downloads 357
2043 Development of Method for Recovery of Nickel from Aqueous Solution Using 2-Hydroxy-5-Nonyl- Acetophenone Oxime Impregnated on Activated Charcoal

Authors: A. O. Adebayo, G. A. Idowu, F. Odegbemi

Abstract:

Investigations on the recovery of nickel from aqueous solution using 2-hydroxy-5-nonyl-acetophenone oxime (LIX-84I) impregnated on activated charcoal were carried out. The LIX-84I was impregnated onto the pores of dried activated charcoal by the dry method, and optimum conditions for different equilibrium parameters (pH, adsorbent dosage, extractant concentration, agitation time, and temperature) were determined using a simulated nickel solution. Kinetics and adsorption isotherm studies were also evaluated. It was observed that the efficiency of recovery with LIX-84I impregnated on charcoal was dependent on the pH of the aqueous solution, as there was little or no recovery at pH below 4. However, as the pH was raised, percentage recovery increased and peaked at pH 5.0. The recovery was found to increase with temperature up to 60ºC. Nickel was also observed to adsorb onto the loaded charcoal best at a lower extractant concentration (0.1 M) when compared with higher concentrations. Similarly, a moderately low dosage (1 g) of the adsorbent showed better recovery than larger dosages. These optimum conditions were used to recover nickel from the leachate of Ni-MH batteries dissolved in sulphuric acid, and a 99.6% recovery was attained. Adsorption isotherm studies showed that the equilibrium data fitted best to the Temkin model, with a negative value of the constant b (-1.017 J/mol) and a high correlation coefficient, R², of 0.9913. Kinetic studies showed that the adsorption process followed a pseudo-second-order model. Thermodynamic parameter values (∆G⁰, ∆H⁰, and ∆S⁰) showed that the adsorption was endothermic and spontaneous. The impregnated charcoal appreciably recovered nickel using a relatively smaller volume of extractant than what is required in solvent extraction. Desorption studies showed that the loaded charcoal is reusable three times, and so might be economical for nickel recovery from waste batteries.
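The pseudo-second-order fit referred to above can be sketched as follows. This is a minimal illustration with noiseless synthetic data; the qe and k2 values below are assumed, not the paper's measured parameters.

```python
import numpy as np

# Pseudo-second-order kinetics sketch: dq/dt = k2 * (qe - q)^2, whose
# linearized form is t/q(t) = 1/(k2*qe^2) + t/qe. Fitting t/q against t
# therefore recovers qe and k2 from kinetic data.
k2_true, qe_true = 0.002, 200.0                      # g/(mg*min), mg/g (assumed)
t = np.array([5., 10., 20., 30., 60., 120., 240.])   # contact time (min)
q = (k2_true * qe_true**2 * t) / (1 + k2_true * qe_true * t)

slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope                 # slope = 1/qe
k2_fit = slope**2 / intercept        # intercept = 1/(k2*qe^2)
print(round(qe_fit, 1), round(k2_fit, 4))
```

With real data the same linear fit would be applied to measured (t, q) pairs, and R² of the line indicates how well the pseudo-second-order model holds.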

Keywords: charcoal, impregnated, LIX-84I, nickel, recovery

Procedia PDF Downloads 131
2042 Quality Management of Drinking Water Purification Process in the 15-Liter Container Using Design of Experiment and Process Capability Analysis

Authors: Chanchai Wimon, Polin Muangngam, Thannapat Nimsumram, Chanin Prombutra, Prasert Aengchuan, Perawat Boonpuek

Abstract:

Cleaning water containers is essential in drinking water production to prevent contamination and residual chemicals from washing liquid. Water distribution divisions in Thailand are facing a critical problem with residual contamination in 15-liter drinking water containers due to dust and residual chemicals (TDS value > 200) after normal washing. A thorough washing process is required before filling purified water into each container. Unfortunately, the washing procedure and frequency do not align with the work instructions provided by the health department. The measured Total Dissolved Solids (TDS) value of the remaining water was found to range between 195 and 202, exceeding the standard TDS for excellent drinking water (50-190 ppm). This research uses the design of experiment technique to improve the washing process and reduce such contamination. Statistical data from a survey of the cleaning process were collected to identify the affecting factors. Washing time and water volume were varied to test the efficiency of the washing process in reducing residual sediment in the water. The result indicates that cleaning the 15-liter container with 2 liters of tap water mixed with 15 milliliters of dishwashing liquid for 3.12 minutes per container is optimal, as the resulting TDS is reduced to 189.75, falling within the standard range for good drinking water. The results of this study would benefit the drinking water industry in implementing a statistically evaluated cleaning procedure without conducting multiple trials, thus saving takt time and production costs.
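A design-of-experiment analysis of this kind can be sketched with a minimal 2² factorial model. The coded factor levels and the TDS readings below are hypothetical illustrations, not the study's measured data.

```python
import numpy as np

# Minimal 2^2 factorial sketch of the washing experiment: two factors
# (washing time and water volume) at coded -1/+1 levels, with TDS (ppm)
# as the response. The responses are assumed values for illustration.
wash_time = np.array([-1., +1., -1., +1.])   # coded washing time
water_vol = np.array([-1., -1., +1., +1.])   # coded water volume
tds = np.array([202., 196., 198., 190.])     # hypothetical TDS responses

# Fit the main-effects model: tds = b0 + b_time*time + b_vol*volume
X = np.column_stack([np.ones(4), wash_time, water_vol])
(b0, b_time, b_vol), *_ = np.linalg.lstsq(X, tds, rcond=None)
print(b0, b_time, b_vol)  # negative coefficients: both factors lower TDS
```

In a full analysis the fitted effects would be tested for significance and the interaction term added before choosing the optimal settings.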

Keywords: design of experiment, drinking water purification, quality management, production process reliability

Procedia PDF Downloads 33
2041 Effectiveness and Efficiency of Unified Philippines Accident Reporting and Database System in Optimizing Road Crash Data Usage with Various Stakeholders

Authors: Farhad Arian Far, Anjanette Q. Eleazar, Francis Aldrine A. Uy, Mary Joyce Anne V. Uy

Abstract:

The Unified Philippine Accident Reporting and Database System (UPARDS) is a newly developed system by Dr. Francis Aldrine Uy of the Mapua Institute of Technology. Its main purpose is to provide an advanced road accident investigation tool and a record keeping and analysis system for stakeholders such as the Philippine National Police (PNP), Metro Manila Development Authority (MMDA), Department of Public Works and Highways (DPWH), Department of Health (DOH), and insurance companies. The system is composed of two components: the mobile application for road accident investigators, which takes advantage of available technology to advance data gathering, and the web application, which integrates all accident data for the use of all stakeholders. The researchers, with the cooperation of the PNP’s Vehicle Traffic Investigation Sector of the City of Manila, conducted field-testing of the application in fifteen (15) accident cases. Simultaneously, the researchers also distributed surveys to the PNP, Manila Doctors Hospital, and Charter Ping An Insurance Company to gather their insights regarding the web application. The survey was designed based on the Technology Acceptance Model, an information systems theory. The results of the surveys revealed that the respondents were greatly satisfied with the visualization and functions of the applications, as they proved to be effective and far more efficient in comparison with the conventional pen-and-paper method. In conclusion, the pilot study was able to address the need for improvement of the current system.

Keywords: accident, database, investigation, mobile application, pilot testing

Procedia PDF Downloads 420
2040 Life Cycle Assessment of Residential Buildings: A Case Study in Canada

Authors: Venkatesh Kumar, Kasun Hewage, Rehan Sadiq

Abstract:

Residential buildings consume significant amounts of energy and produce a large amount of emissions and waste. However, there is a substantial potential for energy savings in this sector, which needs to be evaluated over the life cycle of residential buildings. Life Cycle Assessment (LCA) methodology has been employed to study the primary energy uses and associated environmental impacts of different phases (i.e., product, construction, use, end of life, and beyond building life) for residential buildings. Four different alternatives of residential buildings in Vancouver (BC, Canada) with a 50-year lifespan have been evaluated, including High Rise Apartment (HRA), Low Rise Apartment (LRA), Single family Attached House (SAH), and Single family Detached House (SDH). Life cycle performance of the buildings is evaluated for embodied energy, embodied environmental impacts, operational energy, operational environmental impacts, total life-cycle energy, and total life-cycle environmental impacts. Estimation of operational energy and LCA are performed using DesignBuilder software and Athena Impact Estimator software, respectively. The study results revealed that, over the life span of the buildings, the energy use and the environmental impacts follow an identical pattern. LRA is found to be the best alternative in terms of embodied energy use and embodied environmental impacts, while HRA showed the best life-cycle performance in terms of minimum energy use and environmental impacts. Sensitivity analysis has also been carried out to study the influence of building service lifespans of 50, 75, and 100 years on the relative significance of embodied energy and total life-cycle energy. The life-cycle energy requirement of SDH is found to be the most significant among the four types of residential buildings. Overall, the results disclose that the operational phase of these buildings accounts for 90% of the total life-cycle energy, which far outweighs the minor differences in embodied effects between the buildings.
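The lifespan sensitivity check described above reduces to a simple accounting identity: total life-cycle energy is embodied energy plus annual operational energy times service life. The sketch below uses assumed GJ figures (not the paper's measured values), chosen so the 50-year case reproduces the ~90% operational share reported.

```python
# Sensitivity of the operational-energy share to the building service
# lifespan. Both energy figures are hypothetical illustrations.
embodied_gj = 5_000.0              # assumed embodied energy of one building
operational_gj_per_year = 900.0    # assumed annual operational energy

for years in (50, 75, 100):
    total = embodied_gj + operational_gj_per_year * years
    share = operational_gj_per_year * years / total
    print(years, round(share, 3))  # the share grows as the lifespan lengthens
```

This makes the qualitative conclusion visible: the longer the assumed service life, the more the operational phase dominates and the less the embodied differences between building types matter.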

Keywords: building simulation, environmental impacts, life cycle assessment, life cycle energy analysis, residential buildings

Procedia PDF Downloads 448
2039 Research on the Aeration Systems’ Efficiency of a Lab-Scale Wastewater Treatment Plant

Authors: Oliver Marunțălu, Elena Elisabeta Manea, Lăcrămioara Diana Robescu, Mihai Necșoiu, Gheorghe Lăzăroiu, Dana Andreya Bondrea

Abstract:

In order to obtain efficient pollutant removal in small-scale wastewater treatment plants, uniform water flow has to be achieved. The experimental setup, designed for treating high-load wastewater (leachate), consists of two aerobic biological reactors and a lamellar settler. Both biological tanks were aerated using three different types of aeration systems: perforated pipes, membrane air diffusers, and tube ceramic diffusers. The possibility of homogenizing the water mass with each of the air diffusion systems was evaluated comparatively. The oxygen concentration was determined by optical sensors with data logging. The experimental data were analyzed comparatively for all three air dispersion systems, aiming to identify the oxygen concentration variation under different operational conditions. The oxygenation capacity was calculated for each of the three systems and used as a performance and selection parameter. The global mass transfer coefficients were also evaluated, as important tools in designing the aeration system. Even though using the tubular porous diffusers leads to higher oxygen concentrations compared to the perforated pipe system (which provides medium-sized bubbles in the aqueous solution), it does not reach the threshold of 80% oxygen saturation in less than 30 minutes. The study has shown that the optimal solution for the studied configuration was the radial air diffusers, which ensure an oxygen saturation of 80% in 20 minutes. These values increased further when the air flow was increased.
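The 80%-saturation comparison above follows directly from the standard oxygen transfer model behind the global mass transfer coefficient kLa. The kLa values in this sketch are assumptions chosen only to mirror the reported 20-minute versus >30-minute behaviour of the two diffuser types.

```python
import math

# Oxygen transfer model used to compare aeration systems:
#   dC/dt = kLa * (Cs - C)  =>  C(t) = Cs * (1 - exp(-kLa * t))
# Time to reach 80% of saturation: t80 = ln(1/(1-0.8))/kLa = ln(5)/kLa.
def t80_minutes(kla_per_min: float) -> float:
    return math.log(5.0) / kla_per_min

print(round(t80_minutes(0.0805), 1))   # assumed kLa: ~20 min to 80%
print(round(t80_minutes(0.045), 1))    # assumed kLa: well over 30 min
```

Conversely, fitting the measured dissolved-oxygen curve to the exponential model yields kLa itself, which is how the global mass transfer coefficient is usually extracted from logged sensor data.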

Keywords: flow, aeration, bioreactor, oxygen concentration

Procedia PDF Downloads 372
2038 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models

Authors: Azadeh Jafari, Robert G. Owens

Abstract:

In this study, a geometrical multiscale approach, in which the 2-D Navier-Stokes equations and constitutive equations are coupled with 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. In this study, we introduce a new velocity correction scheme to decouple the velocity computation from the pressure one. To evaluate the capability of our new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using coupling with the lumped parameter model. Comprehensive studies have been done on the sensitivity of the numerical scheme to the initial conditions, elasticity, and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems, whose application goes significantly beyond the one addressed in this work.
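A common concrete form of the 0-D lumped parameter outflow model is a three-element Windkessel (RCR) circuit; the sketch below is a minimal forward-Euler integration of such a model, not the paper's specific formulation. All parameter values and the constant inflow are assumed, chosen so that the steady state is easy to verify analytically.

```python
# Minimal 0-D three-element Windkessel (RCR) lumped-parameter model of the
# kind used as an outflow boundary condition for a 2-D flow domain:
#   C * dp_c/dt = q - p_c / Rd,   p_out = p_c + Rp * q
Rp, Rd, C = 0.05, 1.0, 1.5        # proximal/distal resistance, compliance
dt, q, p_c = 1e-3, 10.0, 0.0      # time step, inflow rate, initial state
for _ in range(20_000):           # integrate 20 s >> time constant Rd*C
    p_c += dt * (q - p_c / Rd) / C
p_out = p_c + Rp * q              # pressure fed back to the flow solver
print(round(p_c, 3), round(p_out, 3))  # prints: 10.0 10.5
```

In a coupled simulation, q would be the instantaneous flow rate computed at the 2-D outflow section each time step, and p_out would be imposed back on that boundary, closing the geometrical multiscale loop.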

Keywords: geometrical multiscale models, haemorheology model, coupled 2-D navier-stokes 0-D lumped parameter modeling, computational fluid dynamics

Procedia PDF Downloads 345
2037 Genotypic Response Differences among Faba Bean Accessions under Regular Deficit Irrigation (RDI)

Authors: M. Afzal, Salem Safer Alghamdi, Awais Ahmad

Abstract:

A limited amount of irrigation water is an alarming threat to arid and semiarid agriculture. However, genotypic differences in response to water deficit conditions within species have been reported frequently. The present study was conducted in order to measure the genotypic differences among faba bean accessions under Regular Deficit Irrigation (RDI). Five seeds from each accession were sown in 135 silt-filled pots (30 x 24 cm). The experiment was laid out under a split-plot arrangement and replicated thrice. Treatments consisted of three RDI levels (100% (control), 60%, and 40% of the field capacity) and fifteen faba bean accessions (two local accessions as reference, while thirteen came from different sources around the world). The irrigation treatment was started from the very first day of sowing. Plant height, shoot dry weight, stomatal conductance, and total chlorophyll contents (SPAD reading) were measured one month after germination. Irrigation, faba bean accession, and all possible interactions were highly significant for all studied parameters. Regular deficit irrigation hampered plant growth and the associated parameters, which declined as irrigation decreased (100% > 60% > 40%). Accessions responded differently under regular deficit irrigation, and some of them performed even better than the local accessions. A highly significant correlation among all parameters was also observed. It was concluded from the results that the above parameters could be used as markers to identify genotypic differences in water deficit stress response. This outcome encourages the use of superior faba bean genotypes in breeding programs for improved varieties to enhance water use efficiency under stress conditions.

Keywords: accessions, stomatal conductance, total chlorophyll contents, RDI, regular deficient irrigation

Procedia PDF Downloads 274
2036 Sensor and Actuator Fault Detection in Connected Vehicles under a Packet Dropping Network

Authors: Z. Abdollahi Biron, P. Pisu

Abstract:

Connected vehicles are one of the promising technologies for future Intelligent Transportation Systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange their information with each other and the infrastructure. Although this interconnection of the vehicles can be potentially beneficial in creating an efficient, sustainable, and green transportation system, a set of safety and reliability challenges accompanies this technology. The first challenge arises from information loss due to an unreliable communication network, which affects the control/management system of the individual vehicles and the overall system. Such a scenario may lead to degraded or even unsafe operation, which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect the individual vehicle’s safe operation and in turn create a potentially unsafe node in the vehicular network. Further, sending faulty sensor information to other vehicles, and failure in actuators, may significantly affect the safe operation of the overall vehicular network. Therefore, it is of utmost importance to take these issues into consideration while designing the control/management algorithms of the individual vehicles as part of a connected vehicle system. In this paper, we consider a connected vehicle system under Co-operative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that deals with these aforementioned challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of lost information under an unreliable network. Further, a sliding mode observer-based algorithm is used to improve sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.
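The core idea of Kalman filtering under packet drops can be sketched in one dimension: when a packet is lost, the measurement update is simply skipped and the estimate coasts on the prediction alone. The vehicle model, noise levels, and drop pattern below are assumptions for illustration, not the paper's actual CACC setup; the measurement is noiseless to keep the run reproducible.

```python
# 1-D Kalman filter sketch for position tracking under packet drops.
dt, q_var, r_var = 0.1, 0.01, 0.25
x_true, v = 0.0, 1.0               # true position, constant speed (m/s)
x_est, p_est = 5.0, 1.0            # deliberately wrong initial estimate

for step in range(200):
    x_true += v * dt
    # prediction step (always runs, even when the packet is lost)
    x_est += v * dt
    p_est += q_var
    # measurement update, skipped on dropped packets (every 3rd one here)
    if step % 3 != 0:
        z = x_true                     # received measurement
        k = p_est / (p_est + r_var)    # Kalman gain
        x_est += k * (z - x_est)
        p_est *= 1.0 - k

print(abs(x_est - x_true))  # residual error is tiny despite the drops
```

Between drops the error variance p_est grows by the process noise, which is exactly how the filter encodes reduced confidence during communication outages.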

Keywords: fault diagnostics, communication network, connected vehicles, packet drop out, platoon

Procedia PDF Downloads 222
2035 The Increase in Functionalities of King Oyster Mushroom (Pleurotus eryngii) Mycelia Depending on the Increase in Nutritional Components

Authors: Hye-Sung Park, Eun-Ji Lee, Chan-Jung Lee, Won-Sik Kong

Abstract:

This study was conducted to research king oyster mushroom (Pleurotus eryngii) mycelia with reinforced functionalities. Saccharide components at 0 to 4%, such as glucose (glu), lactose (lac), mannitol (man), xylose (xyl), and fructose (fru), and amino acid components at 0 to 0.04%, such as aspartic acid (asp), cysteine (cys), threonine (thr), glutamine (gln), and serine (ser), were added to liquid media, and the antioxidant activities, nitrite scavenging activities, and total polyphenol contents of the cultured mycelia were measured. In the saccharide-added group, the four strains other than ASI 2887 had high antioxidant activities when 1% of xyl was added; in particular, the antioxidant activity of ASI 2839 was 73.9%, the highest value. In the amino acid-added group, the antioxidant activity of ASI 2839 was 66.3%, the highest value, when 0.2% of ser was added, but all five strains had lower antioxidant activities than the saccharide-added group overall. In the saccharide-added group, the four strains other than ASI 2887 had higher nitrite scavenging activities than the other groups when 1% of xyl was added; in particular, the nitrite scavenging activity of ASI 2824 was 57.8%, the highest value. The saccharide-added group and the amino acid-added group showed a similar efficiency of nitrite scavenging activity. Although the same component-added group did not show a consistent increase or decrease in total polyphenol contents, ASI 2839, with the highest antioxidant activity, had 6.8 mg/g, the highest content, when 1% of xyl was added. In conclusion, this study demonstrated that when 1% of xyl was added, functionalities of Pleurotus eryngii mycelia, including antioxidant activities, nitrite scavenging activities, and total polyphenol contents, improved.

Keywords: king oyster mushroom, saccharide, amino acid, mycelia

Procedia PDF Downloads 135
2034 Role and Impact of Artificial Intelligence in Sales and Distribution Management

Authors: Kiran Nair, Jincy George, Suhaib Anagreh

Abstract:

Artificial intelligence (AI) in a marketing context is a deterministic tool designed to optimize and enhance marketing tasks, research tools, and techniques. It is on the verge of transforming marketing roles and revolutionizing the entire industry. This paper aims to explore the current dissemination of the application of artificial intelligence (AI) in the marketing mix, reviewing the scope and application of AI in various aspects of sales and distribution management. The paper also aims at identifying the areas where AI strongly impacts factors of sales and distribution management such as the distribution channel, purchase automation, customer service, merchandising automation, and shopping experiences. This is a qualitative research paper that examines the impact of AI on the sales and distribution management of 30 multinational brands in six different industries, namely: airline; automobile; banking and insurance; education; information technology; retail and telecom. Primary data were collected by means of interviews and questionnaires from a sample of 100 marketing managers selected using a convenience sampling method. The data were then analyzed using descriptive statistics, correlation analysis, and multiple regression analysis. The study reveals that AI applications are extensively used in sales and distribution management, with a strong impact on various factors such as identifying new distribution channels, automation in merchandising, customer service, and purchase automation, as well as sales processes. International brands have already integrated AI extensively into their day-to-day operations for better efficiency and improved market share, while others are investing heavily in new AI applications to gain competitive advantage.
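The multiple-regression step of such a survey analysis can be sketched as below: a perceived-impact score is regressed on AI-usage factors across the surveyed managers. The data here are synthetic, generated from assumed coefficients purely to illustrate the method, not the study's actual survey responses.

```python
import numpy as np

# Multiple regression sketch with n = 100 respondents (matching the
# sample size) and two hypothetical predictors on a 1-5 Likert-like scale.
rng = np.random.default_rng(42)
n = 100
purchase_auto = rng.uniform(1, 5, n)     # assumed predictor: purchase automation
cust_service = rng.uniform(1, 5, n)      # assumed predictor: customer service
impact = 0.8 * purchase_auto + 0.5 * cust_service + rng.normal(0, 0.1, n)

# Ordinary least squares via the design matrix [1, x1, x2]
X = np.column_stack([np.ones(n), purchase_auto, cust_service])
coef, *_ = np.linalg.lstsq(X, impact, rcond=None)
print(np.round(coef[1:], 2))  # estimates close to the true 0.8 and 0.5
```

A real analysis would also report standard errors and p-values for each coefficient to decide which factors have a statistically significant impact.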

Keywords: artificial intelligence, sales and distribution, marketing mix, distribution channel, customer service

Procedia PDF Downloads 133
2033 Trading off Accuracy for Speed in Powerdrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration of log data. Even though it is a highly parallelized column-store and uses in-memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimizing performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines, so they should be easily applicable to other systems. For the first optimization, we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 when compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of using sampling on accuracy and propose a simple heuristic for annotating individual result values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries, this effectively brings down the 95th latency percentile from 30 to 4 seconds.
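The idea of annotating sampled result values as accurate or not can be sketched as follows: each group count is estimated from a uniform row sample and flagged "approximate" when its relative standard error exceeds a threshold. The dataset, the 1% sampling rate, and the 5% threshold are illustrative assumptions, not PowerDrill's actual heuristic.

```python
import random
random.seed(1)

# Estimate per-group counts from a 1% uniform sample and attach an
# accuracy annotation based on a Poisson-style relative standard error.
rows = ["frontend"] * 90_000 + ["backend"] * 10_000  # hypothetical log data
rate = 0.01
sample = [r for r in rows if random.random() < rate]

def estimate(group):
    n = sum(1 for r in sample if r == group)
    est = n / rate                        # scale the sampled count back up
    rse = n ** -0.5 if n else 1.0         # ~1/sqrt(n) relative std. error
    return est, ("accurate" if rse < 0.05 else "approximate")

print(estimate("frontend")[1], estimate("backend")[1])  # prints: accurate approximate
```

Large groups are well represented in the sample and get flagged accurate, while rare groups carry a visible "approximate" marker until the full scan completes.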

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 243
2032 Partially Phosphorylated Polyvinyl Phosphate-PPVP Composite: Synthesis and Its Potentiality for Zr (IV) Extraction from an Acidic Medium

Authors: Khaled Alshamari

Abstract:

A partially phosphorylated polyvinyl phosphate derivative (PPVP) was synthesized and functionalized to extract zirconium (IV) from Egyptian zircon sand. The PPVP composite was characterized via FT-IR, XPS, BET, EDX, TGA, HNMR, C-NMR, GC-MS, XRD, and ICP-OES analyses, which demonstrated a satisfactory synthesis of PPVP and zircon dissolution from Egyptian zircon sand. Controlling parameters, such as pH value, shaking time, initial zirconium concentration, PPVP dose, nitrate ion concentration, co-ions, temperature, and eluting agents, have been optimized. At 25 °C, pH 0, 20 min shaking, 0.05 mol/L zirconium ions, and 0.5 mol/L nitrate ions, PPVP has a promising retention capacity of 195 mg/g, equivalent to 390 mg/L zirconium ions. From the extraction-distribution isotherm, the experimental data fit the Langmuir model better than the Freundlich model; the Langmuir theoretical capacity of 196.07 mg/g is in close agreement with the experimental value of 195 mg/g. The adsorption of zirconium ions onto the PPVP composite follows pseudo-second-order kinetics, with a theoretical capacity value of 204.08 mg/g. According to the thermodynamic analysis, the extraction process is exothermic, spontaneous, and favored at low temperatures. The thermodynamic parameters ∆S (−0.03 kJ/mol), ∆H (−12.22 kJ/mol), and ∆G were also considered. As the temperature grows, ∆G values increase from −2.948 kJ/mol at 298 K to −1.941 kJ/mol at 338 K. Zirconium ions may be eluted from the loaded PPVP by 0.025 M HNO₃ with a 99% efficiency rate. Zirconium ions showed good separation factors towards some co-ions, such as Hf⁴+ (28.82), Fe³+ (10.64), Ti⁴+ (28.82), V⁵+ (86.46), and U⁶+ (68.17). A successful alkali fusion technique with NaOH flux, followed by extraction with PPVP, is used to obtain a high-purity zirconia concentrate with a zircon content of 72.77% and a purity of 98.29%. These optimized conditions could thus be applied for practical zirconium recovery.
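The Langmuir fit mentioned above can be sketched as follows. The data here are synthetic: qm is taken from the abstract (196.07 mg/g), while the Langmuir constant b and the equilibrium-concentration grid are assumed values for illustration.

```python
import numpy as np

# Langmuir isotherm sketch: qe = qm*b*Ce / (1 + b*Ce). In linearized form
# Ce/qe = 1/(qm*b) + Ce/qm, so a straight-line fit of Ce/qe against Ce
# recovers qm (monolayer capacity) and b (affinity constant).
qm_true, b_true = 196.07, 0.9                  # mg/g (from abstract), L/mg (assumed)
Ce = np.array([5., 10., 20., 40., 80., 160.])  # equilibrium conc. (mg/L)
qe = qm_true * b_true * Ce / (1 + b_true * Ce)

slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm_fit = 1.0 / slope          # slope = 1/qm
b_fit = slope / intercept     # intercept = 1/(qm*b)
print(round(qm_fit, 2), round(b_fit, 2))
```

With experimental (Ce, qe) points, the R² of this line versus the corresponding Freundlich linearization is what decides which isotherm describes the data better.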

Keywords: zirconium extraction, partially phosphorylated polyvinyl phosphate (PPVP), acidic medium, zircon

Procedia PDF Downloads 43
2031 Service Flow in Multilayer Networks: A Method for Evaluating the Layout of Urban Medical Resources

Authors: Guanglin Song

Abstract:

(Objective) Situated within the context of China's tiered medical treatment system, this study aims to analyze the spatial causes of urban healthcare access difficulties from the perspective of the configuration of healthcare facilities. (Methods) A social network analysis approach is employed to construct a healthcare demand and supply flow network between major residential clusters and various tiers of hospitals in the city. (Conclusion) The findings reveal that: 1. there exists an overall maldistribution and over-concentration of healthcare resources in the study area, characterized by structural imbalance; 2. the low rate of primary care utilization in the study area is a key factor contributing to congestion at higher-tier hospitals, as excessive reliance on these institutions by neighboring communities exacerbates the problem; 3. gradual optimization of the healthcare facility layout in the study area, encompassing holistic, local, and individual institutional levels, can enhance systemic efficiency and resource balance. (Prospects) This research proposes a method for evaluating urban healthcare resource distribution structures based on service flows within hierarchical networks. It offers spatially targeted optimization suggestions for promoting the implementation of the tiered healthcare system and alleviating challenges related to accessibility and congestion in seeking medical care, and it provides new ideas for researchers and healthcare managers in countries and cities around the world facing similar challenges.
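A toy version of such a cluster-to-hospital service-flow network can be sketched as below: demand from each residential cluster is split across hospitals in proportion to hospital capacity divided by travel distance (a simple gravity rule). All clusters, capacities, and distances are invented for illustration; the paper's actual network and assignment rule are not specified here.

```python
# Residential clusters -> hospitals service-flow sketch (gravity split).
clusters = {"C1": 100, "C2": 80, "C3": 60}                    # daily demand
hospitals = {"TopTier": 500, "Primary1": 80, "Primary2": 80}  # capacity
dist = {  # assumed travel distance from cluster to hospital
    ("C1", "TopTier"): 2, ("C1", "Primary1"): 1, ("C1", "Primary2"): 4,
    ("C2", "TopTier"): 3, ("C2", "Primary1"): 5, ("C2", "Primary2"): 1,
    ("C3", "TopTier"): 1, ("C3", "Primary1"): 6, ("C3", "Primary2"): 6,
}

load = {h: 0.0 for h in hospitals}
for c, demand in clusters.items():
    w = {h: hospitals[h] / dist[(c, h)] for h in hospitals}   # gravity weights
    total = sum(w.values())
    for h in hospitals:
        load[h] += demand * w[h] / total

top_share = load["TopTier"] / sum(load.values())
print(round(top_share, 2))  # most flow concentrates at the top-tier hospital
```

Even in this toy network, the large top-tier hospital captures most of the flow, which is the kind of structural over-concentration the flow-network evaluation is designed to expose.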

Keywords: flow of public services, urban networks, healthcare facilities, spatial planning

Procedia PDF Downloads 39