Search results for: transversal pendant domination number
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10199


3569 Global Healthcare Village Based on Mobile Cloud Computing

Authors: Laleh Boroumand, Muhammad Shiraz, Abdullah Gani, Rashid Hafeez Khokhar

Abstract:

Cloud computing, the delivery of hardware and software as a service over a network, has applications in the area of health care. The emergency cases reported in most medical centers call for an efficient scheme that makes health data available with low response time. To this end, we propose a mobile global healthcare village (MGHV) model that combines the components of three deployment models (country, continent, and global health cloud) to help solve the problem mentioned above. In the continent model, two data centers are created, one local and one global. The local data center serves requests from residents of the continent, whereas the global data center serves the requests of others. With the methods adopted, the availability of relevant medical data to patients, specialists, and emergency staff is assured regardless of location and time. Our intensive simulation experiments show that a service broker policy optimized for response time yields very good performance in terms of reduced response time. Our results remain comparable to others as the number of virtual machines increases (80-640 virtual machines); the proportional increase in response time stays within 9%. The simulation results also show that utilizing MGHV reduces health care expenditures and helps address the shortage of qualified medical staff faced by both developed and developing countries.
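The optimized-response-time broker policy the abstract evaluates can be sketched as follows; the data-center names, latencies, service times, and VM counts are illustrative assumptions, not figures from the study.

```python
# Minimal sketch of an optimized-response-time service broker policy:
# each incoming request is routed to the data center with the lowest
# estimated response time (network latency + queueing delay).
# All figures below are illustrative assumptions, not values from the study.

class DataCenter:
    def __init__(self, name, latency_ms, service_ms, vms):
        self.name = name
        self.latency_ms = latency_ms      # network latency to the requester
        self.service_ms = service_ms      # mean service time per request
        self.vms = vms                    # number of virtual machines
        self.queued = 0                   # requests currently assigned

    def estimated_response_ms(self):
        # queueing delay grows with the backlog per available VM
        return self.latency_ms + self.service_ms * (1 + self.queued / self.vms)

def route(data_centers):
    """Pick the data center with the lowest estimated response time."""
    best = min(data_centers, key=lambda dc: dc.estimated_response_ms())
    best.queued += 1
    return best.name

local = DataCenter("continent-local", latency_ms=20, service_ms=5, vms=80)
global_dc = DataCenter("global", latency_ms=120, service_ms=5, vms=640)

# Requests from residents stay on the nearby center until it saturates.
assignments = [route([local, global_dc]) for _ in range(100)]
```

With these (assumed) latencies, the nearby center wins every routing decision until its queue-induced delay exceeds the global center's network penalty, which mirrors the local/global split the abstract describes.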

Keywords: mobile cloud computing (MCC), e-healthcare, availability, response time, service broker policy

Procedia PDF Downloads 377
3568 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable hazard rate shapes is introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its tendency to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed "bathtub") hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other 3-parameter parametric survival distributions, such as the exponentiated Weibull, the 3-parameter lognormal, the 3-parameter gamma, the 3-parameter Weibull, and the 3-parameter log-logistic (also known as shifted log-logistic) distributions. The proposed distribution provided a better fit than all of the competing distributions based on goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set are also carried out.
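A minimal sketch of the maximum likelihood fit, assuming one parameterization of the generalized log-logistic hazard found in the literature (the authors' exact form may differ); the simulated data, sample size, and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed generalized log-logistic (GLL) parameterization:
#   hazard  h(t) = a*k*(k*t)**(a-1) / (1 + (e*t)**a)
#   cum. H(t) = (k/e)**a * log(1 + (e*t)**a)   (the integral of h)
# This is one form from the literature, not necessarily the authors' exact one.

def gll_hazard(t, a, k, e):
    return a * k * (k * t) ** (a - 1.0) / (1.0 + (e * t) ** a)

def gll_cum_hazard(t, a, k, e):
    return (k / e) ** a * np.log1p((e * t) ** a)

def neg_log_likelihood(params, t):
    a, k, e = np.exp(params)  # log-parameterization keeps a, k, e positive
    return -np.sum(np.log(gll_hazard(t, a, k, e))) + np.sum(gll_cum_hazard(t, a, k, e))

def fit_gll(t):
    res = minimize(neg_log_likelihood, x0=np.zeros(3), args=(t,), method="Nelder-Mead")
    return np.exp(res.x)

# Inverse-transform sampling from S(t) = (1 + (e*t)**a) ** (-(k/e)**a)
rng = np.random.default_rng(0)
a0, k0, e0 = 1.5, 1.0, 0.8
u = rng.uniform(size=2000)
t = ((u ** (-(e0 / k0) ** a0) - 1.0) ** (1.0 / a0)) / e0
a_hat, k_hat, e_hat = fit_gll(t)
```

The same skeleton extends to right-censored data by dropping the `log h` term for censored observations, which is how survival likelihoods are usually assembled.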

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 202
3567 Shared Heart with a Common Atrial Complex and Persistent Right Dorsal Aorta in Conjoined Twins

Authors: L. C. Prasanna, Antony Sylvan D’Souza, Kumar M. R. Bhat

Abstract:

Although life as a conjoined twin would seem intolerable, there has recently been increased interest in this subject because of the growing number of cases in which attempts have been made to separate the twins surgically. We have reviewed articles on cardiovascular anomalies in conjoined twins and present the rarest anomaly, in a dicephalus parapagus fetus having two heads attached to one body from the neck or upper chest downwards, with one pair of limbs and one set of reproductive organs. Both twins shared a common thoracic cavity with a single sternum. When the thoracic cavity was opened, a common anterior mediastinum was found. On opening the pericardium, two separate, closely apposed hearts were exposed, placed side by side. The left heart was slightly larger than the right, and the two were joined at the atrial level. Four atrial appendages were present, two for each twin. The atrial complex was a common chamber posterior to the ventricles. A single large tributary, which could be taken as the inferior vena cava, drained into the common atrial chamber. In this case, the heart could not be assigned to either twin and is therefore referred to as a shared heart within a common pericardial sac. The right and left descending thoracic aortae joined just above the diaphragm to form a common descending thoracic aorta, which passes through an opening in the diaphragm to continue as a common abdominal aorta with a normal branching pattern. On interior dissection, it was observed that the two atria have a wide communication, which could be a wide patent foramen ovale, and that this common atrial cavity communicates with a remnant of a possible common sinus venosus.

Keywords: atrium, congenital anomaly, conjoined twin, sinus venosus

Procedia PDF Downloads 394
3566 Determinants of Selenium Intake in a High HIV Prevalence Fishing Community in Bondo District, Kenya

Authors: Samwel Boaz Otieno, Fred Were, Ephantus Kabiru, Kaunda Waza

Abstract:

A study was done to establish the determinants of selenium intake in a high HIV prevalence fishing community in Pala, Bondo district, Kenya. It was established that most of the respondents (61%) were smallholder farmers and fishermen {χ2 (1, N=386) p<0.000}, that most of them (91.2%) had up to college-level education {χ2 (1, N=386) p<0.000}, that the numbers of males and females were not significantly different {χ2 (1, N=386) p=0.263}, and that 83.5% of respondents were married {χ2 (1, N=386) p=0.000}. The study showed that adults take on average 2.68 meals a day (N=382, SD=0.603), while children take 3.02 meals a day (N=386, SD=1.031); that in most households (82.6%) food is prepared by the women {χ2 (1, N=386) p=0.000}; and further that 50% of foods eaten in that community are purchased {χ2 (1, N=386)=0.1818, p=0.6698}. The foods eaten by 75.2% of the respondents were Oreochromis niloticus, Lates niloticus, and Sorghum bicolor, while 64.1% ate vegetables; both children and adults eat the same types of food, and the traditional foods that have become extinct are mainly vegetables (46%). The study established that selenium levels in foods eaten in the Pala sub-locations vary, with traditional vegetables having higher levels of selenium: for example, Launaea cornuta (148.5 mg/kg), Cleome gynandra (121.5 mg/kg), and Vigna unguiculata (21.97 mg/kg), compared with Rastrineobola argentea (51 mg/kg), Lates niloticus (0), Oreochromis niloticus (0), Sorghum bicolor (19.97 mg/kg), and Sorghum bicolor (0). The study showed that there is an inverse relationship between the foods eaten and their selenium levels {RR=1.21, p=0.000}, with the foods eaten by 75.2% of respondents (Oreochromis niloticus/Lates niloticus) having no detectable selenium. The four soil types identified in the study area had varying selenium levels: peat loam (13.3 mg/kg), sandy loam (10.7 mg/kg), clay (2.8 mg/kg), and loam (4.8 mg/kg).
It was concluded from this study that for the foods eaten by most of the respondents, the selenium levels were below the Dietary Reference Intake.
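The chi-square tests quoted above can be reproduced in outline with scipy; the counts below are reconstructed from the reported 61% of N = 386 and are therefore approximate, not the study's raw tallies.

```python
from scipy.stats import chisquare

# Reconstruction of one reported test: does the occupation split (61%
# farmers/fishermen among N = 386 respondents) differ from an even 50/50?
# Counts are derived from the reported percentage and are approximate.
n = 386
farmers_fishermen = round(0.61 * n)           # ~235 respondents
others = n - farmers_fishermen                # ~151 respondents

# chisquare defaults to equal expected frequencies across categories
stat, p = chisquare([farmers_fishermen, others])
```

The resulting p-value is far below 0.05, matching the abstract's conclusion that the occupational split is significantly uneven.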

Keywords: determinants, HIV, food, fishing, selenium

Procedia PDF Downloads 260
3565 Mapping of Geological Structures Using Aerial Photography

Authors: Ankit Sharma, Mudit Sachan, Anurag Prakash

Abstract:

Rapid growth in data acquisition technologies such as drones has led to advances in, and interest in, collecting high-resolution images of geological fields. While drones are advantageous in capturing a high volume of data in short flights, a number of challenges must be overcome for efficient analysis of this data, especially during data acquisition, image interpretation, and processing. We introduce a method that allows effective mapping of geological fields using photogrammetric data of surfaces, drainage areas, water bodies, etc., captured by airborne vehicles such as UAVs. Satellite images are not used because of their inadequate resolution, capture dates that may be up to a year old, limited availability, the difficulty of capturing the exact image needed, and the lack of night vision. The method combines advanced automated image-interpretation technology with human data interaction to model structures. First, geological structures are detected from the primary photographic dataset, and the equivalent three-dimensional structures are then identified from a digital elevation model (DEM). From this information, dip and dip direction can be calculated. The structural map is generated by a specified methodology: choosing the appropriate camera, the camera's mounting system, and the UAV design (based on the area and application); addressing challenges of airborne systems such as errors in image orientation, payload limits, mosaicking, georeferencing, and registration of different images; and finally applying the DEM. The paper shows the potential of our method for accurate and efficient modeling of geological structures, captured particularly from remote, inaccessible, and hazardous sites.
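The dip and dip-direction calculation mentioned above can be sketched from three DEM points on a planar surface; the coordinate convention (x = east, y = north, z = up) and the sample coordinates are assumptions for illustration.

```python
import numpy as np

# Fit a plane through three DEM points and derive the dip angle and the
# dip-direction azimuth from the plane normal. Axes: x = east, y = north,
# z = up (all in metres). The sample points below are illustrative.

def dip_and_direction(p1, p2, p3):
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)        # plane normal
    if n[2] < 0:
        n = -n                            # make the normal point upward
    # dip = tilt of the normal away from vertical = tilt of the plane
    dip = np.degrees(np.arccos(n[2] / np.linalg.norm(n)))
    # dip direction = azimuth (clockwise from north) of steepest descent
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dip_dir

# A bed dipping 30 degrees due east (azimuth 090):
dip, dd = dip_and_direction([0, 0, 0],
                            [0, 1, 0],
                            [1, 0, -np.tan(np.radians(30))])
```

In practice the three points would be replaced by a least-squares plane fit over many DEM samples of the exposed surface, but the normal-to-dip conversion is identical.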

Keywords: digital elevation model, mapping, photogrammetric data analysis, geological structures

Procedia PDF Downloads 686
3564 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies

Authors: Jaballah Jamil

Abstract:

KLD (Peter Kinder, Steve Lydenberg and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not explicitly monitor the rated firm, which suggests that KLD ratings may not include private information. Moreover, the incapacity of KLD to accurately predict the extra-financial rating of Enron casts doubt on the reliability of KLD ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates a number of equally-weighted portfolio returns, excess stock returns, and book-to-market ratios to different dimensions of KLD social responsibility ratings. We first find that over the last two decades KLD rating changes significantly and negatively influence the stock returns and book-to-market ratios of rated firms. This finding suggests that a rise in a corporate social responsibility rating lowers the firm's risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the Enron scandal, and we find that after the scandal this significant effect disappears. This finding supports the view that the Enron scandal annihilated the KLD's effect on socially responsible investors. Therefore, our findings may question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.
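A minimal sketch of the kind of portfolio regression the study describes, run on synthetic data; the variable names, effect sizes, and sample length are illustrative assumptions, not the paper's dataset or specification.

```python
import numpy as np

# Regress equally-weighted portfolio excess returns on an indicator of
# whether the portfolio was formed on KLD rating upgrades. All data here
# are synthetic, with an assumed negative upgrade effect built in.

rng = np.random.default_rng(42)
n_months = 240                                      # ~20 years of monthly returns
rating_upgrade = rng.integers(0, 2, n_months)       # 1 = upgrade portfolio month
excess_return = (0.004                              # baseline monthly excess return
                 - 0.005 * rating_upgrade           # assumed (synthetic) effect
                 + rng.normal(0.0, 0.01, n_months)) # idiosyncratic noise

# Ordinary least squares via the normal equations (numpy lstsq)
X = np.column_stack([np.ones(n_months), rating_upgrade])
beta, *_ = np.linalg.lstsq(X, excess_return, rcond=None)
intercept, upgrade_effect = beta
```

A full replication would add standard errors and sub-sample the regression before and after December 2001 to test whether the coefficient's significance disappears, as the abstract reports.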

Keywords: KLD social rating agency, investors' perception, investment decision, financial performance

Procedia PDF Downloads 439
3563 Agent-Based Modeling to Simulate the Dynamics of Health Insurance Markets

Authors: Haripriya Chakraborty

Abstract:

The healthcare system in the United States is considered to be one of the most inefficient and expensive systems when compared to other developed countries. Consequently, there are persistent concerns regarding the overall functioning of this system. For instance, the large number of uninsured individuals and high premiums are pressing issues that are shown to have a negative effect on health outcomes with possible life-threatening consequences. The Affordable Care Act (ACA), which was signed into law in 2010, was aimed at improving some of these inefficiencies. This paper aims at providing a computational mechanism to examine some of these inefficiencies and the effects that policy proposals may have on reducing these inefficiencies. Agent-based modeling is an invaluable tool that provides a flexible framework to model complex systems. It can provide an important perspective into the nature of some interactions that occur and how the benefits of these interactions are allocated. In this paper, we propose a novel and versatile agent-based model with realistic assumptions to simulate the dynamics of a health insurance marketplace that contains a mixture of private and public insurers and individuals. We use this model to analyze the characteristics, motivations, payoffs, and strategies of these agents. In addition, we examine the effects of certain policies, including some of the provisions of the ACA, aimed at reducing the uninsured rate and the cost of premiums to move closer to a system that is more equitable and improves health outcomes for the general population. Our test results confirm the usefulness of our agent-based model in studying this complicated issue and suggest some implications for public policies aimed at healthcare reform.
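A toy version of such an agent-based insurance market, assuming simple willingness-to-pay individuals and one premium-adjusting insurer; all parameters and behavioral rules are illustrative assumptions, not the paper's calibrated model.

```python
import random

# Minimal agent-based sketch: individuals enroll if the effective premium
# (premium minus subsidy) is within their willingness to pay; the insurer
# adjusts its premium toward observed average claims plus a margin. A
# subsidy, one of the ACA-style levers, lowers the uninsured rate.

class Insurer:
    def __init__(self, premium):
        self.premium = premium

    def update(self, avg_claim):
        # move premium halfway toward average claims plus a 10% margin
        self.premium += 0.5 * (1.1 * avg_claim - self.premium)

def simulate(subsidy, years=20, n=1000):
    rng = random.Random(0)  # same synthetic population for each policy run
    people = [{"wtp": rng.uniform(500, 5000),    # willingness to pay ($/yr)
               "risk": rng.uniform(200, 4000)}   # expected annual claims ($)
              for _ in range(n)]
    insurer = Insurer(premium=2000)
    uninsured_rate = 1.0
    for _ in range(years):
        enrolled = [p for p in people
                    if insurer.premium - subsidy <= p["wtp"]]
        uninsured_rate = 1 - len(enrolled) / n
        if enrolled:
            avg_claim = sum(p["risk"] for p in enrolled) / len(enrolled)
            insurer.update(avg_claim)
    return uninsured_rate

base = simulate(subsidy=0)
subsidized = simulate(subsidy=1000)
```

Because both runs share the same synthetic population, the comparison isolates the subsidy's effect: the subsidized market ends with a strictly lower uninsured rate, the qualitative pattern the abstract examines.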

Keywords: agent-based modeling, healthcare reform, insurance markets, public policy

Procedia PDF Downloads 138
3562 Micro-Meso 3D FE Damage Modelling of Woven Carbon Fibre Reinforced Plastic Composite under Quasi-Static Bending

Authors: Aamir Mubashar, Ibrahim Fiaz

Abstract:

This research presents a three-dimensional finite element modelling strategy to simulate damage in a quasi-static three-point bending analysis of a woven twill 2/2 type carbon fibre reinforced plastic (CFRP) composite at the micro-meso level using the cohesive zone modelling technique. A meso-scale finite element model comprising a number of plies was developed in the commercial finite element code Abaqus/Explicit. The interfaces between the plies were explicitly modelled using cohesive zone elements to allow for debonding by crack initiation and propagation. The load-deflection response of the CFRP within the quasi-static range was obtained and compared with data existing in the literature, which provided validation of the model at the global scale. The outputs of the global model were then used to develop a simulation model capturing the micro-meso scale material features. The sub-model consisted of a refined-mesh representative volume element (RVE) modelled in the TexGen software, which was later embedded with cohesive elements in the finite element software environment. The results obtained from the developed strategy successfully predicted the overall load-deflection response and the damage in the global model and sub-model at the flexural limit of the specimen. A detailed analysis of the effects of the micro-scale features was carried out.
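The cohesive zone elements mentioned above are commonly governed by a bilinear traction-separation law; the sketch below assumes illustrative strength and displacement values, not the paper's calibrated interface properties.

```python
# Bilinear traction-separation law, the standard constitutive model for
# cohesive zone elements: linear elastic rise to a peak traction t_max at
# opening delta0, then linear softening (damage) to zero traction at
# delta_f, where the interface is fully debonded. Values are illustrative.

def bilinear_traction(delta, t_max=60.0, delta0=1e-3, delta_f=1e-2):
    """Traction (MPa) as a function of opening displacement delta (mm)."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return t_max * delta / delta0                          # elastic branch
    if delta <= delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening branch
    return 0.0                                                 # fully failed

# Mode-I fracture energy = area under the triangle = 0.5 * t_max * delta_f
G_c = 0.5 * 60.0 * 1e-2   # N/mm, for the illustrative values above
```

In an Abaqus-style implementation these three quantities (stiffness, peak traction, fracture energy) are exactly the inputs used to define the cohesive element response and the damage-evolution criterion.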

Keywords: woven composites, multi-scale modelling, cohesive zone, finite element model

Procedia PDF Downloads 138
3561 Prevalence of Human Papillomavirus in Squamous Intraepithelial Lesions and Cervical Cancer in Women of the North of Chihuahua, Mexico

Authors: Estefania Ponce-Amaya, Ana Lidia Arellano-Ortiz, Cecilia Diaz-Hernandez, Jose Alberto Lopez-Diaz, Antonio De La Mora-Covarrubias, Claudia Lucia Vargas-Requena, Mauricio Salcedo-Vargas, Florinda Jimenez-Vega

Abstract:

Cervical cancer (CC) is the second leading cause of death among women worldwide, and it has been associated with persistent infection by human papillomavirus (HPV). The goal of the current study was to identify the prevalence of HPV infection in women with abnormal Pap smears attended at a dysplasia clinic in Ciudad Juarez, Mexico. Methods: Cervical samples from 146 patients who attended the Colposcopy Clinic at Sanitary Jurisdiction II of Cd. Juarez were collected for histopathology and molecular study. DNA was isolated for HPV detection by polymerase chain reaction (PCR) using MY09/11 and GP5/6 primers. The associated risk factors were assessed by a questionnaire. The statistical analysis was performed by ANOVA, using EpiInfo V7 software. Results: HPV infection was present in 142 patients (97.3%). The prevalence of HPV infection was about 96% in each of the evaluated groups: low-grade squamous intraepithelial lesion (LSIL), high-grade squamous intraepithelial lesion (HSIL), and CC. We found statistical significance (p < 0.05) for gestation and number of births as risk factors. The median values showed an ascending trend with lesion progression, and CC showed a statistically significant difference with respect to the pre-carcinogenic stages. Conclusions: A high prevalence of HPV infection exists in these Mexican patients, and for that reason, we are studying the most prevalent HPV genotypes in this population.

Keywords: cervical cancer, HPV, HPV prevalence, squamous intraepithelial lesion

Procedia PDF Downloads 320
3560 Requirements to Establish a Taxi Sharing System in an Urban Area

Authors: Morteza Ahmadpur, Ilgin Gokasar, Saman Ghaffarian

Abstract:

That the transportation system plays an important role in the management of societies is an undeniable fact, and it is one of the most challenging issues in people's daily lives. As the population of urban areas increases, the demand for transportation modes also increases. Accordingly, a more flexible and dynamic transportation system is clearly required to satisfy people's requirements. Nowadays, there is a significant increase in the number of environmental issues all over the world caused by human activities. New technological achievements bring new horizons for humans and have changed every aspect of their lifestyle, and transportation is no exception. By using new technology, societies can modernize their transportation systems and increase their feasibility. Real-time taxi sharing systems are among the most novel and modern systems worldwide. Establishing this kind of system in an urban area requires the most advanced technologies of a transportation system: GPS navigation devices, computers, and social networks are just some of its components. Like carpooling, real-time taxi sharing is one of the best ways to better utilize the empty seats in most cars and taxis, thus decreasing energy consumption and transport costs. It can serve areas not covered by a public transit system and act as a transit feeder service. Taxi sharing is also capable of serving one-time trips, not only recurrent commute trips or scheduled trips. In this study, we describe the requirements and parameters needed to establish a useful real-time ride sharing system in an urban area. The parameters and requirements of this study can be used in any urban area.

Keywords: transportation, intelligent transportation systems, ride-sharing, taxi sharing

Procedia PDF Downloads 427
3559 A Multi-Dimensional Neural Network Using the Fisher Transform to Predict the Price Evolution for Algorithmic Trading in Financial Markets

Authors: Cristian Pauna

Abstract:

Trading the financial markets is a widespread activity today. A large number of investors, companies, and public or private funds buy and sell every day in order to make a profit. Since the advent of electronic trading, algorithmic trading has become the prevalent method of making trade decisions: orders are sent almost instantly by computers using mathematical models. This paper presents a price prediction methodology based on a multi-dimensional neural network. Using the Fisher transform, the neural network is trained in a low-latency, auto-adaptive process to predict the price evolution for the next period of time. The model is designed especially for algorithmic trading and uses real-time price series. It was found that the characteristics of the Fisher function applied at the node scale level can generate reliable trading signals within the neural network methodology, and real-time tests showed that the method can be applied in any timeframe to trade the financial markets. The paper also includes the steps to implement the presented methodology in an automated trading system. Real trading results are displayed and analyzed in order to qualify the model. In conclusion, the compared results reveal that the neural network methodology applied together with the Fisher transform at the node level can generate good price predictions and build reliable trading signals for algorithmic trading.
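A sketch of an Ehlers-style Fisher transform over a price series, the kind of preprocessing the abstract applies before the neural network; the window length, smoothing constants, and sample prices are conventional defaults assumed here, not the paper's exact configuration.

```python
import math

# Ehlers-style Fisher transform: normalize price into (-1, 1) over a
# rolling window, smooth it, then apply 0.5 * ln((1+x)/(1-x)) (= atanh(x))
# with its own smoothing. The output approximates a Gaussian-like series
# whose extremes flag potential turning points.

def fisher_transform(prices, period=10):
    x_prev, f_prev, out = 0.0, 0.0, []
    for i, p in enumerate(prices):
        window = prices[max(0, i - period + 1): i + 1]
        lo, hi = min(window), max(window)
        raw = 0.0 if hi == lo else 2.0 * (p - lo) / (hi - lo) - 1.0
        x = 0.33 * raw + 0.67 * x_prev          # smooth the normalized price
        x = max(min(x, 0.999), -0.999)          # keep atanh's argument in (-1, 1)
        f = 0.5 * math.log((1 + x) / (1 - x)) + 0.5 * f_prev
        out.append(f)
        x_prev, f_prev = x, f
    return out

prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 109, 111, 112]
signal = fisher_transform(prices)
```

In the methodology described, a series like `signal` (rather than raw prices) would feed the network's input nodes, which is what "applying the Fisher function at the node level" suggests.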

Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, neural network

Procedia PDF Downloads 160
3558 Influence of Hearing Aids on Non-Medically Treatable Deafness

Authors: Niragira Donatien

Abstract:

The progress of technology creates new expectations for patients, and the world of deafness is no exception. In recent years, there have been considerable advances in technologies aimed at assisting failing hearing. According to the usual medical vocabulary, hearing aids are actually orthotics: they do not replace an organ but compensate for a functional impairment. Hearing amplification is useful for a large number of people with hearing loss. Hearing aids restore speech audibility, but their benefits vary depending on the quality of residual hearing. The hearing aid is not a "cure" for deafness; it cannot correct all affected hearing abilities and should be considered an aid to communication. In identifying the best candidates for hearing aids, the urge to judge from the audiogram alone should be resisted, as audiometry only indicates the ability to detect non-verbal sounds. To prevent hearing aids from ending up in the drawer, it is important to ensure that the patient's disability situations justify the use of this type of orthosis. Receptive pre-fitting counselling is crucial: the person with hearing loss must be informed of the advantages and disadvantages of amplification in his or her case. Their expectations must be realistic, and they need to be aware that the adaptation process requires a good deal of patience and perseverance. They should also be informed about the various models and types of hearing aids, including all the aesthetic, functional, and financial considerations. If the person's motivation "survives" pre-fitting counselling, we are in the presence of a good candidate for amplification. Beyond its relevance, hearing aid fitting raises other questions, such as whether one or both ears should be fitted. In short, the results of this study show that hearing aids significantly improve the quality of audibility in the patient; hence, this technology must be made accessible everywhere in the world and must continue to progress.

Keywords: audiology, influence, hearing, medically, treatable

Procedia PDF Downloads 52
3557 Pavement Failures and Its Maintenance

Authors: Maulik L. Sisodia, Tirth K. Raval, Aarsh S. Mistry

Abstract:

This paper summarizes ongoing research on the defects and maintenance of both flexible and rigid pavements. Various defects in pavements have been identified since the advent of both flexible and rigid pavement. Flexible pavement failure is defined in terms of decreasing serviceability caused by the development of cracks, ruts, potholes, etc.; a flexible pavement structure can be destroyed in a single season by water penetration. Defects in flexible pavements are a problem of multiple dimensions: the phenomenal growth of vehicular traffic (in terms of the number of axle loadings of commercial vehicles), the rapid expansion of the road network, the non-availability of suitable technology, materials, equipment, and skilled labor, and poor fund allocation have all added complexity to the problem. In rigid pavements, different types of distress produce failures such as joint spalling, faulting, shrinkage cracking, punch-outs, and corner breaks. Applying corrections to the existing surface will extend the life of the maintenance works as well as that of the strengthening layer. Maintenance of a road network involves a variety of operations: identification of deficiencies; planning, programming, and scheduling for actual implementation in the field; and monitoring. The essential objective should be to keep the road surface and appurtenances in good condition and to extend the life of the road assets to their design life. The paper describes lessons learnt from pavement failures and problems experienced during the last few years on a number of projects in India. Broadly, the activities include identification of defects and their possible causes, determination of appropriate remedial measures, implementation of these in the field, and monitoring of the results.

Keywords: flexible pavements, rigid pavements, defects, maintenance

Procedia PDF Downloads 172
3556 Efficient Chess Board Representation: A Space-Efficient Protocol

Authors: Raghava Dhanya, Shashank S.

Abstract:

This paper delves into the intersection of chess and computer science, specifically focusing on the efficient representation of chess game states. We propose two methods: the Static Method and the Dynamic Method, each offering unique advantages in terms of space efficiency and computational complexity. The Static Method aims to represent the game state using a fixed-length encoding, allocating 192 bits to capture the positions of all pieces on the board. This method introduces a protocol for ordering and encoding piece positions, ensuring efficient storage and retrieval. However, it faces challenges in representing pieces no longer in play. In contrast, the Dynamic Method adapts to the evolving game state by dynamically adjusting the encoding length based on the number of pieces in play. By incorporating Alive Bits for each piece kind, this method achieves greater flexibility and space efficiency. Additionally, it includes provisions for encoding additional game state information such as castling rights and en passant squares. Our findings demonstrate that the Dynamic Method offers superior space efficiency compared to traditional Forsyth-Edwards Notation (FEN), particularly as the game progresses and pieces are captured. However, it comes with increased complexity in encoding and decoding processes. In conclusion, this study provides insights into optimizing the representation of chess game states, offering potential applications in chess engines, game databases, and artificial intelligence research. The proposed methods offer a balance between space efficiency and computational overhead, paving the way for further advancements in the field.
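The Static Method's 192-bit budget (32 pieces, 6 bits per square) can be sketched as follows; the particular piece-ordering protocol shown is an illustrative assumption, and, as the abstract notes, captured pieces would need an extra convention that this sketch does not handle.

```python
# Static Method sketch: 32 pieces in a fixed, agreed order, each contributing
# a 6-bit square index (0-63), for 32 * 6 = 192 bits total. The ordering
# below is an assumed protocol for illustration; captured pieces are not
# representable without an additional convention.

PIECE_ORDER = [  # assumed fixed protocol order: 16 white pieces, then 16 black
    "WK", "WQ", "WR1", "WR2", "WB1", "WB2", "WN1", "WN2",
    "WP1", "WP2", "WP3", "WP4", "WP5", "WP6", "WP7", "WP8",
    "BK", "BQ", "BR1", "BR2", "BB1", "BB2", "BN1", "BN2",
    "BP1", "BP2", "BP3", "BP4", "BP5", "BP6", "BP7", "BP8",
]

def encode(squares):
    """Pack a {piece: square 0-63} mapping into a single 192-bit integer."""
    state = 0
    for piece in PIECE_ORDER:
        state = (state << 6) | squares[piece]   # append 6 bits per piece
    return state

def decode(state):
    """Recover the {piece: square} mapping from the packed integer."""
    squares = {}
    for piece in reversed(PIECE_ORDER):         # low bits hold the last piece
        squares[piece] = state & 0x3F
        state >>= 6
    return squares

position = {p: i for i, p in enumerate(PIECE_ORDER)}  # dummy squares for the demo
packed = encode(position)
```

The Dynamic Method would prepend per-piece "alive bits" and emit square indices only for pieces still on the board, which is where its space advantage over FEN in late-game positions comes from.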

Keywords: chess, optimisation, encoding, bit manipulation

Procedia PDF Downloads 50
3555 Intelligent Control of Bioprocesses: A Software Application

Authors: Mihai Caramihai, Dan Vasilescu

Abstract:

The main research objective of the experimental bioprocess analyzed in this paper was to obtain large biomass quantities. The bioprocess is performed in a 100 L Bioengineering bioreactor with 42 L of cultivation medium made of peptone, meat extract, and sodium chloride. The reactor was equipped with pH, temperature, dissolved oxygen, and agitation controllers. The operating parameters were 37 °C, 1.2 atm, 250 rpm, and an air flow rate of 15 L/min. The main objective of this paper is to present a case study demonstrating that intelligent control, which describes the complexity of the biological process in the qualitative and subjective manner perceived by a human operator, is an efficient control strategy for this kind of bioprocess. In order to simulate the bioprocess evolution, an intelligent control structure based on fuzzy logic has been designed. The specific objective is to present a fuzzy control approach based on human experts' rules versus a modeling approach of cell growth based on bioprocess experimental data. Kinetic modeling can represent only a small number of bioprocesses in terms of overall biosystem behavior, while a fuzzy control system (FCS) can manipulate incomplete and uncertain information about the process, assuring high control performance, and provides an alternative solution for non-linear control, as it is closer to the real world. The high degree of non-linearity and time variance of bioprocesses gives rise to the need for such a control mechanism. BIOSIM, an originally developed software package, implements such a control structure. The simulation study has shown that the fuzzy technique is quite appropriate for this non-linear, time-varying system compared with a classical control method based on an a priori model.
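A minimal sketch of the kind of fuzzy rule base such a controller might use, mapping substrate concentration to a feed rate; the membership breakpoints, rule consequents, and units are illustrative assumptions, not BIOSIM's actual rules.

```python
# Toy Mamdani-style fuzzy controller: triangular membership functions on the
# substrate concentration, three expert rules (LOW substrate -> HIGH feed,
# MEDIUM -> MEDIUM feed, HIGH -> LOW feed), and a weighted-average
# defuzzification of singleton outputs. All numbers are illustrative.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def feed_rate(substrate_g_l):
    # degrees to which the substrate concentration is LOW / MEDIUM / HIGH
    low = tri(substrate_g_l, -1.0, 0.0, 2.0)
    med = tri(substrate_g_l, 1.0, 3.0, 5.0)
    high = tri(substrate_g_l, 4.0, 6.0, 10.0)
    # rule consequents as singleton feed rates (L/h), weighted by rule strength
    rules = [(low, 12.0),   # substrate LOW  -> feed HIGH
             (med, 6.0),    # substrate MED  -> feed MEDIUM
             (high, 1.0)]   # substrate HIGH -> feed LOW
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Because rule strengths overlap and blend, the controller interpolates smoothly between expert rules instead of switching abruptly, which is the property that makes fuzzy control attractive for non-linear, time-varying bioprocesses.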

Keywords: intelligent, control, fuzzy model, bioprocess optimization

Procedia PDF Downloads 327
3554 Formation of Nanochannels by Heavy Ions in Graphene Oxide Reinforced Carboxymethylcellulose Membranes for Proton Exchange Membrane Fuel Cells Applications

Authors: B. Kurbanova, M. Karibayev, N. Almas, K. Ospanov, K. Aimaganbetov, T. Kuanyshbekov, K. Akatan, S. Kabdrakhmanova

Abstract:

Proton exchange membranes (PEMs) operating at high temperatures above 100 °C with excellent mechanical, chemical, and thermochemical stability have received much attention because of their practical application in proton exchange membrane fuel cells (PEMFCs). Nowadays, a large number of polymer and polymer-blend membranes have been investigated for this application, all of which offer both pros and cons; however, PEMFCs still lack ideal membranes with unique properties. In this work, carboxymethylcellulose (CMC) based membranes with dispersed graphene oxide (GO) sheets were fabricated and investigated for PEMFC applications. These membranes and pristine GO were studied by a combination of XRD, XPS, Raman, Brillouin, and FTIR spectroscopy, thermo-mechanical analysis (TGA and dynamic mechanical analysis), and SEM microscopy, while substantial studies of the proton transport properties were provided by electrochemical impedance spectroscopy (EIS) measurements. It was revealed that the addition of CMC to the GO boosts the proton conductivity of the whole membrane, while GO provides good mechanical and thermomechanical stability to the membrane. Further, continuous and ordered nanochannels with well-tailored chemical structures were obtained by irradiation with heavy Kr¹⁷⁺ ions at an energy of 1.75 MeV/nucleon on a heavy ion accelerator. The formation of these nanochannels led to a significant increase in proton conductivity at 50% relative humidity. FTIR and XPS measurements also show that ion irradiation eliminated the oxygen chemical bonds (C=O, C-O) on the GO surface and led to the formation of C=C and C-C bonds; these changes are connected with the increase in conductivity.

Keywords: proton exchange membranes, graphene oxide, fuel cells, carboxymethylcellulose, ion irradiation

Procedia PDF Downloads 92
3553 The Agri-Environmental Instruments in Agricultural Policy to Reduce Nitrogen Pollution

Authors: Flavio Gazzani

Abstract:

Nitrogen is an important agricultural input that is critical for production. However, the introduction of large amounts of nitrogen into the environment has a number of undesirable impacts, such as loss of biodiversity, eutrophication of waters and soils, drinking water pollution, acidification, greenhouse gas emissions, and human health risks. It is a challenge to sustain or increase food production and at the same time reduce losses of reactive nitrogen to the environment, but there are many potential benefits associated with improving nitrogen use efficiency. Reducing nutrient losses from agriculture is crucial to the successful implementation of agricultural policy. Traditional regulatory instruments applied to implement environmental policies to reduce the environmental impacts of nitrogen fertilizers, despite some successes, failed to address many environmental challenges and imposed high costs on society to achieve environmental quality objectives. As a result, economic instruments started to be recognized for their flexibility and cost-effectiveness. The objective of the research project is to analyze the potential for increased use of market-based instruments in nitrogen control policy. The report reviews existing knowledge, bringing different studies together to assess the global nitrogen situation and the most relevant environmental management policies that aim to reduce pollution in a sustainable way without negatively affecting agricultural production and food prices. This analysis provides some guidance on how different market-based instruments might be orchestrated in an overall policy framework for the development and assessment of sustainable nitrogen management from the economic, environmental, and food security points of view.

Keywords: nitrogen emissions, chemical fertilizers, eutrophication, non-point source pollution, dairy farm

Procedia PDF Downloads 329
3552 The Analysis of Spatial Development: Malekan City

Authors: Rahim Sarvar, Bahram Azadbakht, Samira Safaee

Abstract:

The leading goal of all planning is to attain sustainable development, regional balance, suitable distribution of activities, and maximum use of environmental capabilities in the process of developing regions. Intensive concentration of population and activities in one or a few limited geographical localities is one of the main characteristics of most developing countries, especially Iran. Decision-makers' disregard of long-term programs and their reliance on temporary, superficial plans to attain their own objectives create obstacles that result in unbalanced development. The basic cause of these problems is development planning that considers only economic aspects and pays no attention to social and regional feedback, which has ended in social and economic inequality and an unbalanced distribution of development among regions. In addition to studying the spatial planning and structure of Malekan County, this research pursues several other aims: recognizing and introducing approaches to utilize resources optimally, distributing population, activities, and facilities in an optimum fashion, and investigating and identifying the spatial development potentials of the county. Based on documentary, descriptive, analytical, and field studies, this research employs maps to analyze the data, investigates the variables, and applies SPSS, AutoCAD, and ArcView software. The results show that natural factors have a significant influence on the spatial layout of settlements, that the distribution of facilities and functions is not equal among the rural districts of the county, and that there is a spatial equivalence across the region between population and number of settlements.
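The entropy index named in the keywords is commonly a normalized Shannon entropy of how population or facilities are shared among districts, with 1.0 meaning a perfectly even spread. A minimal sketch, with hypothetical district shares:

```python
import math

def entropy_index(shares):
    """Shannon entropy of a distribution of counts across districts,
    normalized by log(n) so that 1.0 means a perfectly even spread."""
    n = len(shares)
    total = sum(shares)
    probs = [s / total for s in shares if s > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(n)

even = entropy_index([25, 25, 25, 25])   # perfectly balanced -> 1.0
skewed = entropy_index([85, 5, 5, 5])    # concentrated -> well below 1.0
```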

Keywords: development, entropy index, Malekan City, planning, regional equilibrium

Procedia PDF Downloads 439
3551 Analytical Description of Disordered Structures in Continuum Models of Pattern Formation

Authors: Gyula I. Tóth, Shaho Abdalla

Abstract:

Even though numerical simulations indeed have a significant precursory/supportive role in exploring the disordered phase displaying no long-range order in pattern formation models, studying the stability properties of this phase and determining the order of the ordered-disordered phase transition in these models necessitate an analytical description of the disordered phase. First, we will present the results of a comprehensive statistical analysis of a large number (1,000-10,000) of numerical simulations in the Swift-Hohenberg model, where the bulk disordered (or amorphous) phase is stable. We will show that the average free energy density (over configurations) converges, while the variance of the energy density vanishes with increasing system size in numerical simulations, which suggests that the disordered phase is a thermodynamic phase (i.e., its properties are independent of the configuration in the macroscopic limit). Furthermore, the structural analysis of this phase in Fourier space suggests that the phase can be modeled by a colored isotropic Gaussian noise, where any instant of the noise describes a possible configuration. Based on these results, we developed the general mathematical framework for finding a pool of solutions to partial differential equations in the sense of a continuous probability measure, which we will present briefly. Applying the general idea to the Swift-Hohenberg model, we show that the amorphous phase can be found, and its properties can be determined analytically. As the general mathematical framework is not restricted to continuum theories, we hope that the proposed methodology will open a new chapter in studying disordered phases.
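The colored Gaussian noise used to model the disordered phase can be sketched by smoothing white Gaussian noise over a finite correlation length. This is a minimal real-space 1-D illustration only; the paper's analysis works in Fourier space, and the system size and correlation length below are arbitrary.

```python
import math, random

def colored_gaussian_noise(n, corr_len, seed=0):
    """Gaussian white noise smoothed with a normalized Gaussian kernel,
    giving a colored noise whose spectrum decays on the scale 1/corr_len."""
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n)]
    half = int(3 * corr_len)
    kernel = [math.exp(-0.5 * (k / corr_len) ** 2) for k in range(-half, half + 1)]
    norm = sum(kernel)
    kernel = [w / norm for w in kernel]
    # Periodic convolution, mimicking periodic boundary conditions.
    return [sum(kernel[j] * white[(i + j - half) % n] for j in range(len(kernel)))
            for i in range(n)]

field = colored_gaussian_noise(256, corr_len=4.0)
```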

Keywords: fundamental theory, mathematical physics, continuum models, analytical description

Procedia PDF Downloads 134
3550 1-D Convolutional Neural Network Approach for Wheel Flat Detection for Freight Wagons

Authors: Dachuan Shi, M. Hecht, Y. Ye

Abstract:

With the trend of digitalization in railway freight transport, a large number of freight wagons in Germany have been equipped with telematics devices, commonly placed on the wagon body. A telematics device contains a GPS module for tracking and a 3-axis accelerometer for shock detection. Beyond these basic functions, it is desirable to use the integrated accelerometer for condition monitoring without any additional sensors. Wheel flats, a common type of failure on the wheel tread, cause large impacts on wagons and infrastructure as well as impulsive noise. A large wheel flat may even cause safety issues such as derailments. To this end, this paper proposes a machine learning approach for wheel flat detection using car body accelerations. Due to the suspension systems, impulsive signals caused by wheel flats are damped significantly and thus can be buried in signal noise and disturbances. Therefore, it is very challenging to detect wheel flats using car body accelerations. The proposed algorithm considers the envelope spectrum of car body accelerations to eliminate the effect of noise and disturbances. Subsequently, a 1-D convolutional neural network (CNN), a well-known deep learning method, is constructed to automatically extract features in the envelope-frequency domain and conduct classification. The constructed CNN is trained and tested on field test data, measured on the underframe of a tank wagon with a 20 mm wheel flat under operational conditions. The test results demonstrate the good performance of the proposed algorithm for real-time fault detection.
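The envelope-spectrum preprocessing can be sketched as follows. A rectify-and-smooth envelope and a single-bin DFT stand in for a full Hilbert-based envelope spectrum, and the signal parameters are hypothetical, not taken from the field test: a 200 Hz carrier amplitude-modulated at 10 Hz mimics one impact per wheel revolution.

```python
import math

def envelope(signal, win=5):
    """Crude envelope: rectify, then moving-average smooth."""
    rect = [abs(v) for v in signal]
    half = win // 2
    return [sum(rect[max(0, i - half):i + half + 1]) /
            len(rect[max(0, i - half):i + half + 1]) for i in range(len(rect))]

def dft_magnitude(x, freq, fs):
    """Magnitude of a single DFT bin at `freq` (Hz) for sample rate `fs`."""
    n = len(x)
    re = sum(v * math.cos(2 * math.pi * freq * i / fs) for i, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * freq * i / fs) for i, v in enumerate(x))
    return 2 * math.hypot(re, im) / n

fs, n = 1000, 1000
sig = [(1 + 0.8 * math.cos(2 * math.pi * 10 * i / fs)) *
       math.cos(2 * math.pi * 200 * i / fs) for i in range(n)]
env = envelope(sig)
peak_at_10 = dft_magnitude(env, 10, fs)   # modulation frequency stands out
peak_at_37 = dft_magnitude(env, 37, fs)   # an unrelated bin stays small
```

In the envelope-frequency domain the 10 Hz "impact rate" dominates even though the raw signal is dominated by the 200 Hz carrier, which is the property the paper's CNN exploits.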

Keywords: fault detection, wheel flat, convolutional neural network, machine learning

Procedia PDF Downloads 131
3549 Harmonic Assessment and Mitigation in Medical Diagnosis Equipment

Authors: S. S. Adamu, H. S. Muhammad, D. S. Shuaibu

Abstract:

Poor power quality in electrical power systems can cause medical equipment at healthcare centres to malfunction and produce wrong medical diagnoses. Equipment such as X-ray and computerized axial tomography machines can pollute the system due to their high level of harmonic production, which may cause a number of undesirable effects like heating, equipment damage and electromagnetic interference. The conventional approach to mitigation uses passive inductor/capacitor (LC) filters, which have some drawbacks such as large size, resonance problems and fixed compensation behaviour. Current solutions generally employ active power filters using suitable control algorithms. This work focuses on assessing the level of total harmonic distortion (THD) in medical facilities and various ways of mitigating it, using the radiology unit of an existing hospital as a case study. The harmonics are measured with a power quality analyzer at the point of common coupling (PCC). The measured THD levels are found to be higher than the IEEE 519-1992 standard limits. The system is then modelled as a harmonic current source using MATLAB/Simulink. To mitigate the unwanted harmonic currents, a shunt active filter is developed using a synchronous detection algorithm to extract the fundamental component of the source currents. A fuzzy logic controller is then developed to control the filter. The simulated THD values without the active power filter are validated against the measured values. The THD with the developed filter shows that the harmonics are now within the recommended limits.
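THD itself is the RMS of the harmonic components relative to the fundamental. A minimal sketch with hypothetical harmonic currents (the actual spectrum at the hospital's PCC is not reported in the abstract):

```python
import math

def thd(fundamental_rms, harmonic_rms):
    """Total harmonic distortion: RMS of the harmonics over the fundamental."""
    return math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

# Hypothetical current spectrum at the PCC (amperes RMS):
# fundamental 10 A; 3rd, 5th and 7th harmonics of 3 A, 2 A and 1 A.
thd_percent = 100 * thd(10.0, [3.0, 2.0, 1.0])  # sqrt(9 + 4 + 1) / 10
```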

Keywords: power quality, total harmonics distortion, shunt active filters, fuzzy logic

Procedia PDF Downloads 479
3548 Cognitive Impairment in Chronic Renal Patients on Hemodialysis

Authors: Fabiana Souza Orlandi, Juliana Gomes Duarte, Gabriela Dutra Gesualdo

Abstract:

Chronic kidney disease (CKD) and its treatment by hemodialysis compromise not only physical, personal and environmental aspects, but also psychological, social and family aspects. Objective: To verify the level of cognitive impairment of chronic renal patients on hemodialysis. Methodology: This is a descriptive, cross-sectional study performed in a dialysis center of a city in the interior of the State of São Paulo. The inclusion criteria were: being 18 years or older; having a medical diagnosis of CKD; being in hemodialysis treatment in this unit; and agreeing to participate in the research by signing the informed consent form (TCLE). A total of 115 participants were evaluated through a participant characterization instrument and Addenbrooke's Cognitive Examination – Revised (ACE-R), scored from 0 to 100, with a cut-off score for the complete battery of <78 and subdivided into five domains: attention and orientation; memory; fluency; language; and visuospatial ability. Most of the participants (66.9%) were Caucasian (54.7%), aged 53.7 (±14.8) years on average, retired (74.7%) and with incomplete elementary schooling (36.5%); the average time in treatment was 46 months. Most of the participants (61.3%) presented impairment in the attention and orientation domain, and 80.4% in the visuospatial domain. Regarding the total ACE-R score, 75.7% of the participants presented scores below the established cut-off. Conclusion: A high percentage (75.7%) scored below the cut-off established for the ACE-R, suggesting that there may be some cognitive impairment among these participants, since the instrument only performs a screening of cognitive health. The results of the study are extremely important so that possible interventions can be designed to minimize impairment, thus improving the quality of life of chronic renal patients.

Keywords: cognition, chronic renal insufficiency, adult health, dialysis

Procedia PDF Downloads 366
3547 Performance of the New Laboratory-Based Algorithm for HIV Diagnosis in Southwestern China

Authors: Yanhua Zhao, Chenli Rao, Dongdong Li, Chuanmin Tao

Abstract:

The Chinese Centers for Disease Control and Prevention (CCDC) issued a new laboratory-based algorithm for HIV diagnosis in April 2016, which initially screens with a combination HIV-1/HIV-2 antigen/antibody fourth-generation immunoassay (IA) followed, when reactive, by an HIV-1/HIV-2 undifferentiated antibody IA in duplicate. Reactive specimens with concordant results undergo supplemental tests with western blots or HIV-1 nucleic acid tests (NATs), and specimens with discordant results receive HIV-1 NATs, p24 antigen tests, or 2-4 week follow-up tests. However, little data evaluating the application of the new algorithm has been reported to date. This study evaluated the performance of the new laboratory-based HIV diagnostic algorithm in an inpatient population of southwest China over its initial 6 months, compared with the old algorithm. Plasma specimens collected from inpatients from May 1, 2016, to October 31, 2016, were submitted to the laboratory for HIV screening performed with both the new testing algorithm and the old version. The sensitivity and specificity of the algorithms and the difference in the numbers of categorized plasma specimens were calculated. Under the new algorithm, 170 of the total 52,749 plasma specimens were confirmed as HIV-positive (0.32%). The sensitivity and specificity of the new algorithm were 100% (170/170) and 100% (52,579/52,579), respectively, while 167 HIV-1 positive specimens were identified by the old algorithm, with a sensitivity of 98.24% (167/170) and a specificity of 100% (52,579/52,579). Three acute HIV-1 infections (AHIs) and two early HIV-1 infections (EHIs) were identified by the new algorithm; the former were missed by the old procedure. Compared with the old version, the new algorithm produced fewer WB-indeterminate results (2 vs. 16, p = 0.001), which led to fewer follow-up tests. Therefore, the new HIV testing algorithm is more sensitive for detecting acute HIV-1 infections while maintaining the ability to verify established HIV-1 infections, and it can dramatically decrease the number of WB-indeterminate specimens.
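The sensitivity and specificity figures follow directly from the reported counts; a minimal sketch reproducing them:

```python
def sensitivity(true_pos, false_neg):
    """Proportion of confirmed infections the algorithm detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of uninfected specimens correctly reported negative."""
    return true_neg / (true_neg + false_pos)

# Counts reported in the abstract: 170 confirmed infections among
# 52,749 specimens; the old algorithm detected only 167 of the 170.
new_sens = sensitivity(170, 0)    # 100%
old_sens = sensitivity(167, 3)    # 98.24%
spec = specificity(52_579, 0)     # 100% for both algorithms
```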

Keywords: algorithm, diagnosis, HIV, laboratory

Procedia PDF Downloads 401
3546 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today’s modern age of technology, vast amounts of data need to be processed in real-time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices. These make use of different protocols, including TCP, UDP, and HTTP/S, to communicate data to web servers and eventually to users. The data obtained from these devices may provide valuable information to users but is mostly in an unreadable format that needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical data, and it is not subject to the consistency and redundancy measures that most other data usually is. Most important to the users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since processing the data usually takes some time, this keeps the database busy and locked for the duration of the processing. Because of this, it decreases the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU usage, storage, and processing time.
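The three-step pull-process-push idea can be sketched as below. SQLite stands in for the production database, and the table names and payload format are purely illustrative; the point is that the CPU-heavy decoding happens between a single fast read and a single batched write, rather than inside one long read-process-write statement that holds a lock.

```python
import sqlite3

def pull_process_push(db_path=":memory:"):
    """Three-step variant: pull raw rows into an in-memory list, process
    locally outside any database lock, then push decoded values in one batch."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS raw(id INTEGER PRIMARY KEY, payload TEXT)")
    con.execute("CREATE TABLE IF NOT EXISTS decoded(id INTEGER PRIMARY KEY, value REAL)")
    con.executemany("INSERT INTO raw(payload) VALUES (?)", [("12.5;",), ("7.25;",)])
    con.commit()

    # Step 1: pull -- one fast read, keeping the lock short.
    rows = con.execute("SELECT id, payload FROM raw").fetchall()

    # Step 2: process -- CPU work on a plain list, no lock held.
    processed = [(i, float(p.rstrip(";"))) for i, p in rows]

    # Step 3: push -- one batched write instead of row-by-row round trips.
    con.executemany("INSERT INTO decoded(id, value) VALUES (?, ?)", processed)
    con.commit()
    result = [v for (v,) in con.execute("SELECT value FROM decoded ORDER BY id")]
    con.close()
    return result

values = pull_process_push()
```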

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 244
3545 Efficacy and Safety of Probiotic Treatment in Patients with Liver Cirrhosis: A Systematic Review and Meta-Analysis

Authors: Samir Malhotra, Rajan K. Khandotra, Rakesh K. Dhiman, Neelam Chadha

Abstract:

There is a paucity of data about the safety and efficacy of probiotic treatment on patient outcomes in cirrhosis. Specifically, it is important to know whether probiotics can improve mortality, hepatic encephalopathy (HE), number of hospitalizations, ammonia levels, quality of life, and adverse events. Probiotics may improve outcomes in patients with acute or chronic HE. However, it is also important to know whether probiotics can prevent the development of HE, even in situations where patients do not have acute HE at the time of administration, and whether probiotics are useful as primary prophylaxis of HE. We aimed to conduct an updated systematic review and meta-analysis to evaluate the safety and efficacy of probiotics in patients with cirrhosis. We searched PubMed, the Cochrane Library, Embase, Scopus, SCI, Google Scholar, conference proceedings, and the references of included studies up to June 2017 to identify randomised clinical trials comparing probiotics with other treatments in patients with cirrhosis. Data were analyzed using MedCalc. Probiotics had no effect on mortality but significantly reduced HE (14 trials, 1073 patients, OR 0.371; 95% CI 0.282 to 0.489). There were not enough data to conduct a meta-analysis on outcomes like hospitalizations and quality of life. The effect on plasma ammonia levels was not significant (SMD -0.429; 95% CI -1.034 to 0.177). There was no difference in adverse events. To conclude, although the included studies had a high risk of bias, the available evidence does suggest a beneficial effect on HE. Larger studies with longer periods of follow-up are needed to determine whether probiotics can reduce all-cause mortality.
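The reported OR and CI are computed on the log-odds scale. A single-study sketch with hypothetical counts follows; an actual meta-analysis pools many such trials, e.g. by inverse-variance weighting, which is not shown here.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table (a, b = events/non-events on treatment;
    c, d = events/non-events on control) with a Wald 95% CI on log(OR)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical single trial: HE in 20/100 on probiotics vs 40/100 on control.
or_, lo, hi = odds_ratio_ci(20, 80, 40, 60)  # OR below 1 favours probiotics
```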

Keywords: cirrhosis, hepatic encephalopathy, meta-analysis, probiotic

Procedia PDF Downloads 201
3544 Experimental Monitoring of the Parameters of the Ionosphere in the Local Area Using the Results of Multifrequency GNSS-Measurements

Authors: Andrey Kupriyanov

Abstract:

In recent years, much attention has been paid worldwide to the problems of ionospheric disturbances and their influence on the signals of global navigation satellite systems (GNSS). This is due to the increase in solar activity, the expansion of the scope of GNSS, the emergence of new satellite systems, the introduction of new frequencies, and many other factors. The influence of the Earth's ionosphere on the propagation of radio signals is an important factor in many applied fields of science and technology. The paper considers the application of the method of transionospheric sounding, using measurements of signals from global navigation satellite systems, to determine the TEC distribution and the scintillations of the ionospheric layers. To calculate these parameters, the International Reference Ionosphere (IRI) model, refined in the local area, is used. The organization of operational monitoring of ionospheric parameters is analyzed using several NovAtel GPStation6 base stations. This setup allows performing primary processing of GNSS measurement data, calculating TEC and detecting scintillation events, modeling the ionosphere using the obtained data, storing data, and performing ionospheric correction of measurements. As a result of the study, it was shown that the transionospheric sounding method can reconstruct the altitude distribution of electron concentration over different altitude ranges and provide operational information about the ionosphere, which is necessary for solving a number of practical problems in many fields of application. The use of multi-frequency, multi-system GNSS equipment and special software will also allow achieving the specified accuracy and volume of measurements.
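Dual-frequency TEC retrieval rests on the first-order ionospheric delay I = 40.3·TEC/f², so the pseudorange difference between two frequencies isolates the TEC. A sketch using GPS L1/L2 frequencies and a hypothetical 20 TECU slant TEC (the geometry-free range is made up for the round-trip check):

```python
def tec_from_pseudoranges(p1_m, p2_m, f1_hz=1575.42e6, f2_hz=1227.60e6):
    """Slant TEC (electrons/m^2) from dual-frequency pseudoranges,
    using the first-order delay I = 40.3 * TEC / f**2."""
    k = (f1_hz**2 * f2_hz**2) / (f1_hz**2 - f2_hz**2)
    return k * (p2_m - p1_m) / 40.3

# Round trip: build pseudoranges for a hypothetical 20 TECU slant TEC
# (1 TECU = 1e16 electrons/m^2) on GPS L1/L2, then recover the TEC.
f1, f2 = 1575.42e6, 1227.60e6
tec_true = 20e16
p1 = 2.2e7 + 40.3 * tec_true / f1**2   # L1 sees a smaller delay
p2 = 2.2e7 + 40.3 * tec_true / f2**2   # L2 sees a larger delay
tecu = tec_from_pseudoranges(p1, p2, f1, f2) / 1e16
```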

Keywords: global navigation satellite systems (GNSS), GPstation6, international reference ionosphere (IRI), ionosphere, scintillations, total electron content (TEC)

Procedia PDF Downloads 181
3543 The Television as an Affordable and Effective Way to Promote Healthy Diet and Physical Activity to Prevent or Treat Obesity

Authors: P. Gil Del Álamo, J. García Pereda, A. Castañeda De La Paz, D. Arazola Lopez, M. D. Cubiles De La Vega, A. Enguíx González, J. M. Muñoz Pichardo

Abstract:

In the last decades, obesity has more than doubled and, together with overweight, is the second leading cause of preventable death. Despite multiple strategies against obesity, no country to date has reduced the number of obese people. To achieve the World Health Organization's target of reversing this tendency, we need dramatic and different actions that engage civil society in creating demand for a healthy lifestyle. The objective of this study is to demonstrate that a mass medium such as television can be used to convince civil society that healthy nutrition and physical activity are affordable, effective, and necessary to prevent and treat obesity. Methodology: 61 individuals (34 women and 27 men) with obesity (mean BMI 45.51) were recruited to follow, during 22 weeks, an intensive lifestyle intervention in order to lose weight in a healthy manner. They were not isolated or moved from their usual environment. The program included endocrinological and nutritional assessment, promotion of physical activity, and psychological support. BMI was measured every week. Time to leave obesity was compared between men and women with a survival analysis. Results: BMI decreased in all cases. Analyzing time to leave obesity, around week 30, 25% of the men had left obesity, and around week 39, 25% of the women had left obesity too. Conclusion: We demonstrate to the audience that improving the quality of the diet and increasing physical activity is a realistic way to lose weight. This evidence can encourage people to act in their own self-interest by changing their lifestyle in order to prevent or reduce their overweight.
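The "time to leave obesity" quartiles come from a survival analysis; a minimal Kaplan-Meier sketch is below. The follow-up data are hypothetical, not the study's, and an "event" is a participant's BMI dropping below the obesity threshold.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier curve for 'still obese': returns (time, S(t)) steps,
    where events mark participants who left obesity and 0s are censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data[i:] if tt == t and e)   # events at t
        n_t = sum(1 for tt, e in data[i:] if tt == t)       # all leaving risk set
        if d:
            s *= 1 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= n_t
        i += n_t
    return curve

# Hypothetical follow-up: (weeks observed, left obesity?) for 8 participants.
curve = kaplan_meier([10, 20, 25, 30, 30, 35, 39, 40],
                     [0,  1,  0,  1,  1,  0,  1,  0])
```

The week at which the curve drops below 0.75 is the "25% left obesity" time quoted in the results.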

Keywords: obesity epidemic, obesity prevention, obesity strategies, social media

Procedia PDF Downloads 290
3542 Weakly Solving Kalah Game Using Artificial Intelligence and Game Theory

Authors: Hiba El Assibi

Abstract:

This study aims to weakly solve Kalah, a two-player board game, by developing a start-to-finish winning strategy using an optimized Minimax algorithm with Alpha-Beta Pruning. In weakly solving Kalah, our focus is on creating an optimal strategy from the game's beginning rather than analyzing every possible position. The project will explore additional enhancements like symmetry checking and code optimizations to speed up the decision-making process. This approach is expected to give insights into efficient strategy formulation in board games and potentially help create games with a fair distribution of outcomes. Furthermore, this research provides a unique perspective on human versus Artificial Intelligence decision-making in strategic games. By comparing the AI-generated optimal moves with human choices, we can explore how seemingly advantageous moves can, in the long run, be harmful, thereby offering a deeper understanding of strategic thinking and foresight in games. Moreover, this paper discusses the evaluation of our strategy against existing methods, providing insights on performance and computational efficiency. We also discuss the scalability of our approach to the game, considering different board sizes (number of pits and stones) and rule variations, and study how these affect performance and complexity. The findings have potential implications for the development of AI applications in strategic game planning, enhance our understanding of human cognitive processes in game settings, and offer insights into creating balanced and engaging game experiences.
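The core of Minimax with Alpha-Beta Pruning can be sketched generically. The toy game below (each player adds 1 or 3 to a running total) stands in for Kalah's pit-and-stone rules, which are not implemented here; plugging in Kalah means supplying its own `moves`, `apply_move`, and `evaluate`.

```python
import math

def minimax(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Depth-limited minimax with alpha-beta pruning over a generic game."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    if maximizing:
        best = -math.inf
        for m in options:
            best = max(best, minimax(apply_move(state, m), depth - 1, alpha,
                                     beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:   # prune: the opponent never allows this line
                break
        return best
    best = math.inf
    for m in options:
        best = min(best, minimax(apply_move(state, m), depth - 1, alpha,
                                 beta, True, moves, apply_move, evaluate))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

# Toy game: state is a running total, each move adds 1 or 3; over 4 plies
# the maximizer picks +3 and the minimizer picks +1, so optimal play gives 8.
value = minimax(0, 4, -math.inf, math.inf, True,
                lambda s: [1, 3], lambda s, m: s + m, lambda s: s)
```

Transposition tables and symmetry checking, named in the keywords, would cache `state -> value` entries around the recursive call to avoid re-searching repeated positions.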

Keywords: minimax, alpha beta pruning, transposition tables, weakly solving, game theory

Procedia PDF Downloads 55
3541 Assessment of Nurse's Knowledge Toward Infection Control for Wound Care in Governmental Hospital at Amran City-Yemen

Authors: Fares Mahdi

Abstract:

Background: Infection control is an important concern for all health care professionals, especially nurses. Nurses have a higher risk of both acquiring infections themselves and transmitting them to other patients. Aim of this study: To assess nurses' knowledge regarding infection control for wound care. Methodology: A descriptive research design was used. The study sample comprised 200 nurses in the public hospitals of Amran City, Yemen, covering the hospitals' nursing population; a standard closed-ended questionnaire was used to collect the data. Results: Less than half (37.5%) of the nurses were from 22 May Hospital, and the remaining 62.5% were from the Maternal and Child Hospital. By department, most (22.5%) of the nurses worked in an intensive care unit, followed by 20% in the pediatric ward and about 19% in the surgical department, while only about 8.5% worked in other departments. Regarding training, about 21% of the nurses had received course training in wound care management, while the other 79% had not. Overall, more than two-thirds (68%) of the nurses had fair knowledge of infection control for wound care. Conclusion: More than two-thirds (68%) of the nurses had fair knowledge of infection control for wound care. Recommendations: A training program on infection control measures should be provided for newly employed nurses, along with continuing refresher courses on infection control programs and on evidence-based practice in infection control for all health care teams.

Keywords: assessment, knowledge, infection control, wound care, nurses, amran hospitals

Procedia PDF Downloads 95
3540 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline

Authors: Kenan Morani, Esra Kaya Ayana

Abstract:

This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification stage, with optional slice-removal techniques following the segmentation step. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which is then used to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating efficient prediction on 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score than the other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make the final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices, rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on the dataset.
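The dice score used to compare the segmentation methods is the overlap measure 2|A∩B|/(|A|+|B|) between predicted and ground-truth masks; a minimal sketch on toy binary masks:

```python
def dice_score(pred, truth):
    """Dice coefficient for binary masks given as flat 0/1 lists:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2 * inter / size if size else 1.0

# Toy 3x3 lung masks flattened row by row (hypothetical values):
# the prediction misses one ground-truth pixel.
pred  = [1, 1, 0, 1, 0, 0, 0, 0, 0]
truth = [1, 1, 0, 1, 1, 0, 0, 0, 0]
score = dice_score(pred, truth)   # 2*3 / (3 + 4) = 6/7
```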

Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation

Procedia PDF Downloads 131