Search results for: frequency rate of change
11506 Piezoelectric based Passive Vibration Control of Composite Turbine Blade using Shunt Circuit
Authors: Kouider Bendine, Zouaoui Satla, Boukhoulda Farouk Benallel, Shun-Qi Zhang
Abstract:
Turbine blades are subjected to a variety of loads that lead to undesirable vibration. Such vibration can cause serious damage or even lead to total failure of the blade. The present paper addresses the vibration control of a turbine blade. The study proposes a passive vibration control approach using piezoelectric material. The passive control is achieved by shunting an RL circuit to the piezoelectric patch in a parallel configuration. To this end, a finite element model of the blade with the piezoelectric patch is implemented in ANSYS APDL. The model is then subjected to a harmonic frequency-based analysis for the control-on and control-off cases. The results show that the proposed methodology was able to reduce blade vibration by 18%. Keywords: blade, passive piezoelectric vibration control, finite element, shunt circuit
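A resonant RL shunt of the kind described above is typically tuned so that its electrical resonance matches the targeted blade mode. A minimal sketch of that tuning rule follows; the 120 Hz mode frequency and 47 nF patch capacitance are illustrative assumptions, not values from the abstract.

```python
import math

def shunt_inductance(f_mode_hz, c_piezo_farad):
    """Inductance that tunes the RL shunt's electrical resonance to the
    blade mode frequency: L = 1 / ((2*pi*f)^2 * Cp)."""
    omega = 2.0 * math.pi * f_mode_hz
    return 1.0 / (omega ** 2 * c_piezo_farad)

# Illustrative values (not from the abstract): first bending mode at 120 Hz,
# patch capacitance 47 nF.
L = shunt_inductance(120.0, 47e-9)  # tens of henries, hence synthetic inductors in practice
```

The large inductance values that fall out of this formula at structural frequencies are why shunt circuits are often realized with synthetic (gyrator-based) inductors.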
Procedia PDF Downloads 108
11505 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements
Authors: Denis A. Sokolov, Andrey V. Mazurkevich
Abstract:
In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft, shipbuilding, and rocket engineering. This has resulted in the development of measuring instruments capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing their measurement results to a reference measurement on a linear or spatial basis. The reference used in such measurements can be a reference base or a reference rangefinder with the capability to measure angle increments (EDM); the base serves as a set of reference points for this purpose. The concept of the EDM for reproducing the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of the laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of in-house design is employed. The laser radiation travels to corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by the interference signals is calculated in accordance with the recommendations of the International Bureau of Weights and Measures for the indirect measurement of the time of passage of light, according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the repetition frequency of the femtosecond pulses.
The achieved Type A measurement uncertainty for the distance to reflectors 64 m away (N·D/2, where N is an integer), spaced 1 m apart relative to each other, does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of this study. The Type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where an advantage in terms of accuracy can be achieved. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the Type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be used to develop a highly accurate mobile absolute rangefinder designed for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64-meter laboratory comparator as a reference. Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement
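The quoted pulse-repetition distance D/2 = c/(2nF) ≈ 2.5 m can be checked numerically. The ~60 MHz repetition rate below is inferred from the quoted 2.5 m value and is an assumption, not a stated specification of the instrument.

```python
# Worked check of the pulse-repetition distance D/2 = c / (2 n F).
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def half_pulse_distance(rep_rate_hz, n_air=1.00027):
    """Distance between coincidence points of reference and measuring arms."""
    return C_VACUUM / (2.0 * n_air * rep_rate_hz)

d_half = half_pulse_distance(60e6)  # ~2.498 m, i.e. roughly the quoted 2.5 m
```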
Procedia PDF Downloads 64
11504 Anaerobic Digestion of Spent Wash through Biomass Development for Obtaining Biogas
Authors: Sachin B. Patil, Narendra M. Kanhe
Abstract:
A typical cane-molasses-based distillery generates 15 L of wastewater per liter of alcohol produced. Distillery waste, with a COD of over 100,000 mg/L and a BOD of over 30,000 mg/L, ranks high among industrial pollutants in both magnitude and strength. Treatment and safe disposal of this waste has long been a challenging task. The high strength of the wastewater renders aerobic treatment very expensive, and physico-chemical processes have met with little success. Thermophilic anaerobic treatment of distillery waste may provide a high degree of treatment and better recovery of biogas. It may prove more feasible in most parts of a tropical country like India, where the temperature is suitable for thermophilic micro-organisms. Researchers have revealed that, under thermophilic conditions, a higher digestion rate can be achieved due to the increased destruction rate of organic matter and pathogens. The literature review reveals that a variety of anaerobic reactors, including anaerobic lagoons, conventional digesters, anaerobic filters, two-stage fixed-film reactors, and sludge bed and granular bed reactors, have been studied, but few attempts have been made to evaluate the usefulness of thermophilic anaerobic treatment for distillery waste. The present study was carried out to assess the feasibility of thermophilic anaerobic digestion and to facilitate the design of a full-scale reactor. A pilot-scale anaerobic fixed-film fixed-bed reactor (AFFFB) of 25 m3 capacity was designed, fabricated, installed, and commissioned for thermophilic (55-65°C) anaerobic digestion at a constant pH of 6.5-7.5, because these temperature and pH ranges are considered optimum for biogas recovery from distillery wastewater. Under these conditions, the working of the reactor was studied for different hydraulic retention times (HRT) (0.25 days to 12 days) and variable organic loading rates (361.46 to 7.96 kg COD/m3·d).
Parameters such as flow rate and temperature, and chemical parameters such as pH, chemical oxygen demand (COD), biogas quantity, and biogas composition were regularly monitored. It was observed that with the increase in OLR, the biogas production increased, but the specific biogas yield decreased. Similarly, with the increase in HRT, the biogas production decreased, but the specific biogas yield increased. This may be due to the predominance of acid producers over methane producers at higher substrate loading rates. From the present investigation, it can be concluded that for thermophilic conditions the highest COD removal percentage was obtained at an HRT of 8 days, beyond which it tends to decrease from 8 to 12 days HRT. There is little difference between the COD removal efficiency at 8 days HRT (74.03%) and at 5 days HRT (78.06%); therefore, it would not be feasible to increase the reactor size by 1.5 times for a mere 4 percent change in efficiency. Hence, 5 days HRT is considered optimum, at which the biogas yield was 98 m3/day and the specific biogas yield was 0.385 m3 CH4/kg CODr. Keywords: spent wash, anaerobic digestion, biomass, biogas
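The reported figures at the 5-day optimum can be tied together with simple bookkeeping. This sketch uses only the reported reactor volume, HRT, biogas yield, and specific yield; treating the biogas volume as the methane volume is a simplifying assumption, so the implied COD removal is approximate.

```python
# Yield bookkeeping at the optimum 5-day HRT, from the reported values.
reactor_volume_m3 = 25.0
hrt_days = 5.0
biogas_m3_per_day = 98.0
specific_yield = 0.385  # m3 CH4 per kg COD removed (CODr)

flow_m3_per_day = reactor_volume_m3 / hrt_days               # 5 m3/day of spent wash
cod_removed_kg_per_day = biogas_m3_per_day / specific_yield  # ~254.5 kg CODr/day
removal_rate = cod_removed_kg_per_day / reactor_volume_m3    # ~10.2 kg CODr/m3/day
```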
Procedia PDF Downloads 268
11503 Relative Importance of Contact Constructs to Acute Respiratory Illness in General Population in Hong Kong
Authors: Kin On Kwok, Vivian Wei, Benjamin Cowling, Steven Riley, Jonathan Read
Abstract:
Background: The role of social contact behavior, measured through different contact constructs, in the transmission of respiratory pathogens causing acute respiratory illness (ARI) remains unclear. We therefore aim to depict the individual pattern of ARI in the community and investigate the association between different contact dimensions and ARI in Hong Kong. Methods: Between June 2013 and September 2013, 620 subjects participated in the last two waves of recruitment of a population-based longitudinal telephone social contact survey. Some of the subjects in this study were from the same household. Subjects were also provided with symptom diaries to self-report any acute-respiratory-illness-related symptoms between the two days of phone recruitment. Data from 491 individuals who were not infected on the day of phone recruitment and who returned the symptom diaries after the last phone recruitment were used for analysis. Results: After adjusting for different follow-up periods among individuals, the overall incidence rate of ARI was 1.77 per 100 person-weeks. Over 75% of ARI episodes involved runny nose, cough, or sore throat, followed by headache (55%), myalgia (35%), and fever (18%). Using a generalized estimating equation framework accounting for the clustering of subjects living in the same household, we showed that the daily number of locations visited with contacts together with the number of contacts explained the ARI incidence rate better than any single contact construct. Conclusion: Our results suggest that it is the intertwining of contact quantity (number of contacts) and contact intensity (ratio of subjects to contacts) that governs the risk of infection by a collective set of respiratory pathogens.
Our results provide empirical evidence that multiple contact constructs should be incorporated in mathematical transmission models to capture more realistic dynamics of respiratory disease. Keywords: acute respiratory illness, longitudinal study, social contact, symptom diaries
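The "per 100 person-weeks" rate above is the standard way of adjusting for unequal follow-up periods. A minimal sketch follows; the episode and person-time counts are hypothetical numbers chosen only to reproduce the reported 1.77, since the abstract does not give the raw totals.

```python
def incidence_rate_per_100pw(episodes, person_weeks):
    """Crude incidence rate per 100 person-weeks of follow-up."""
    return 100.0 * episodes / person_weeks

# Hypothetical totals consistent with the reported rate: e.g. 100 ARI
# episodes over 5,650 person-weeks of diary follow-up.
rate = incidence_rate_per_100pw(100, 5650)  # ~1.77 per 100 person-weeks
```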
Procedia PDF Downloads 262
11502 A One-Dimensional Modeling Analysis of the Influence of Swirl and Tumble Coefficient in a Single-Cylinder Research Engine
Authors: Mateus Silva Mendonça, Wender Pereira de Oliveira, Gabriel Heleno de Paula Araújo, Hiago Tenório Teixeira Santana Rocha, Augusto César Teixeira Malaquias, José Guilherme Coelho Baeta
Abstract:
Stricter legislation and greater public demands regarding gas emissions and their effects on the environment and on human health are pushing the automotive industry to reinforce research focused on reducing contamination levels. This reduction can be achieved through improvements in internal combustion engines that lower both specific fuel consumption and air pollutant emissions. These improvements can be obtained through numerical simulation, a technique that works together with experimental tests. The aim of this paper is to build, with the support of the GT-Suite software, a one-dimensional model of a single-cylinder research engine to analyze the impact of the variation of swirl and tumble coefficients on the performance and air pollutant emissions of an engine. Initially, the discharge coefficient is calculated with the Converge CFD 3D software, given that it is an input parameter in GT-Power. Mesh sensitivity tests are performed on the 3D geometry built for this purpose, using the mass flow rate through the valve as a reference. In the one-dimensional simulation, the non-predictive combustion model called Three Pressure Analysis (TPA) is adopted, and data such as the mass trapped in the cylinder, the heat release rate, and the accumulated released energy are calculated, so that validation can be performed by comparing these data with those obtained experimentally. Finally, the swirl and tumble coefficients are introduced in their corresponding objects so that their influence can be observed in comparison with the previously obtained results. Keywords: 1D simulation, single-cylinder research engine, swirl coefficient, three pressure analysis, tumble coefficient
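The discharge coefficient mentioned above is conventionally defined as the ratio of the measured mass flow to the ideal isentropic mass flow through the valve. A sketch under that convention follows; the valve area, stagnation conditions, pressure ratio, and measured flow are illustrative assumptions, not values from the paper.

```python
import math

def ideal_mass_flow(area_m2, p0_pa, t0_k, pr, k=1.4, r=287.0):
    """Ideal isentropic mass flow (kg/s) through a valve in the subsonic
    regime, with pressure ratio pr = p_downstream / p0 (air assumed)."""
    term = (2.0 * k / (k - 1.0)) * (1.0 - pr ** ((k - 1.0) / k))
    return area_m2 * p0_pa / math.sqrt(r * t0_k) * pr ** (1.0 / k) * math.sqrt(term)

def discharge_coefficient(measured_flow, area_m2, p0_pa, t0_k, pr):
    """Cd = measured mass flow / ideal isentropic mass flow."""
    return measured_flow / ideal_mass_flow(area_m2, p0_pa, t0_k, pr)

# Illustrative values (not from the paper): 1 cm2 curtain area, ambient
# stagnation conditions, pressure ratio 0.9, measured flow 12 g/s.
cd = discharge_coefficient(0.012, 1e-4, 101325.0, 300.0, 0.9)  # ~0.82
```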
Procedia PDF Downloads 108
11501 Design and Performance Analysis of Advanced B-Spline Algorithm for Image Resolution Enhancement
Authors: M. Z. Kurian, M. V. Chidananda Murthy, H. S. Guruprasad
Abstract:
An approach to super-resolve a low-resolution (LR) image is presented in this paper, which is useful in multimedia communication, medical image enhancement, and satellite image enhancement, where a clear view of the information in the image is required. The proposed Advanced B-Spline method generates a high-resolution (HR) image from a single LR image and tries to retain the higher-frequency components, such as edges, in the image. The method uses the B-Spline technique and crispening. The work is evaluated qualitatively and quantitatively using the Mean Square Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). The method is also suitable for real-time applications. Different combinations of decimation and super-resolution algorithms are tested in the presence of different noise types and noise factors. Keywords: advanced B-spline, image super-resolution, mean square error (MSE), peak signal to noise ratio (PSNR), resolution down converter
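The two quantitative metrics named above have standard definitions, sketched here for 8-bit images represented as flat pixel lists (the representation is a simplifying assumption for illustration).

```python
import math

def mse(original, reconstructed):
    """Mean square error between two equal-sized images (flat pixel lists)."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)

def psnr_db(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    e = mse(original, reconstructed)
    return float("inf") if e == 0 else 10.0 * math.log10(peak * peak / e)
```

Higher PSNR (lower MSE) indicates that the super-resolved image is closer to the ground-truth HR image.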
Procedia PDF Downloads 402
11500 The Jurisprudential Evolution of Corruption Offenses in Spain: Before and after the Economic Crisis
Authors: Marta Fernandez Cabrera
Abstract:
The period of economic boom generated by the housing bubble created a climate of social indifference to the problem of corruption. As a result, prosecutions of and convictions for these criminal offenses were rare. After the economic recession, social awareness of the problem of corruption has increased. This has led Spanish citizens to require the public authorities to tackle the problem as effectively as possible. In order to respond to continuous social demands for exemplary punishment, the legislator has made changes to the crimes against the public administration in the Spanish Criminal Code. However, from the point of view of criminal law, this social change has modified not only the law but also the jurisprudence. After the recession, judges have punished these behaviors more severely than in the past. Before the crisis, it was usual for criminal judges to divert relevant behavior to other areas of the legal system, such as administrative law, and to acquit in the criminal field. Criminal judges considered that administrative law already had mechanisms that could deal effectively with this type of behavior, in keeping with the principle of subsidiarity or ultima ratio. It was also usual for criminal judges to acquit civil servants on the basis of requirements alien to the applicable offense. For example, they required economic damage to the public administration even when the offense in the Criminal Code did not require it. Nevertheless, in recent years these arguments have either partially disappeared or been considerably transformed. Since 2010, a jurisprudential stream has consolidated that aims to provide a more severe response to corruption than it had received until then. This change of opinion, together with greater prosecution of these behaviors by judges and prosecutors, has led to a significant increase in the number of individuals convicted of corruption crimes.
This paper has two objectives. The first is to show that even though judges apply the law impartially, they are responsive to social changes. The second is to identify the erroneous arguments the courts have used up until now. For the present paper, a detailed analysis of the judgments of the Supreme Court before and after 2010 has been carried out. The jurisprudential analysis is complemented with the available statistical data on corruption. Keywords: corruption, public administration, social perception, ultima ratio principle
Procedia PDF Downloads 149
11499 Challenges of Carbon Trading Schemes in Africa
Authors: Bengan Simbarashe Manwere
Abstract:
The entire African continent, comprising 55 countries, holds a 2% share of the global carbon market. The World Bank attributes the continent's insignificant share of, and participation in, the carbon market to limited access to electricity. Approximately 800 million people spread across 47 African countries generate as much power as Spain, a country with a population of 45 million. Only South Africa and North Africa have carbon-reduction investment opportunities on the continent, and they dominate the continent's 2% share of the global carbon market. On the back of the 2015 Paris Agreement, South Africa signed into law the Carbon Tax Act 15 of 2019 and the Customs and Excise Amendment Act 13 of 2019 (Gazette No. 4280) on 1 June 2019. With these laws, South Africa was ushered into the league of active global carbon market players. By increasing the cost of production at a rate of R120/tCO2e, the tax intentionally compels the internalization of pollution as a cost of production and, relatedly, stimulates investment in clean technologies. The first phase covered the period 1 June 2019 to 31 December 2022, during which the tax was meant to escalate at CPI + 2% for Scope 1 emitters. In the second phase, which stretches from 2023 to 2030, the tax will escalate at the inflation rate only, as measured by the consumer price index (CPI). The Carbon Tax Act provides for carbon allowances as mitigation strategies that limit an agent's carbon tax liability by up to 95% for fugitive and process emissions. Although the June 2019 Carbon Tax Act explicitly makes provision for a carbon trading scheme (CTS), the corresponding carbon trading regulations were only finalised in December 2020. This points to a delay in the establishment of the carbon trading scheme (CTS).
Relatedly, emitters in South Africa are not yet able to benefit from the 95% reduction in the effective carbon tax rate from R120/tCO2e to R6/tCO2e, as the Johannesburg Stock Exchange (JSE) has not yet finalized the establishment of the market for trading carbon credits. Whereas most carbon trading schemes have been designed and constructed from the beginning as new tailor-made systems, in countries such as France, Australia, and Romania, which treat carbon as a financial product, South Africa intends, on the contrary, to leverage the existing trading infrastructure of the Johannesburg Stock Exchange (JSE) and the clearing and settlement platforms of Strate, among others, in the interest of the Paris Agreement timelines. The carbon trading scheme will therefore not be constructed from scratch. At the same time, carbon will be treated as a commodity in order to align with the existing institutional and infrastructural capacity. This explains why the Carbon Tax Act is silent about the involvement of the Financial Sector Conduct Authority (FSCA). For South Africa, there is a need to establish the equilibrium stability of the CTS. This is important because South Africa is an innovator in carbon trading, and the successful trading of carbon credits on the JSE will lead to imitation, by early adopters first, followed by the middle majority thereafter. Keywords: carbon trading scheme (CTS), Johannesburg Stock Exchange (JSE), Carbon Tax Act 15 of 2019, South Africa
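The tax arithmetic described above is simple to verify. The headline R120/tCO2e rate, the 95% maximum allowance, and the CPI + 2% first-phase escalation rule are from the abstract; the CPI figure below is an illustrative assumption.

```python
def effective_rate(headline_rate, allowance_fraction):
    """Effective carbon tax per tCO2e after carbon allowances."""
    return headline_rate * (1.0 - allowance_fraction)

def escalate(rate, cpi, premium=0.02):
    """One year of first-phase escalation at CPI + 2%."""
    return rate * (1.0 + cpi + premium)

floor_rate = effective_rate(120.0, 0.95)  # R6/tCO2e, matching the stated minimum
next_year = escalate(120.0, 0.045)        # R127.80 at an assumed 4.5% CPI
```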
Procedia PDF Downloads 77
11498 Characterization of Leakage Current on the Surface of Porcelain Insulator under Contaminated Conditions
Authors: Hocine Terrab, Abdelhafid Bayadi, Adel Kara, Ayman El-Hag
Abstract:
Insulator flashover under polluted conditions has been a serious threat to the reliability of power systems. It is known that the flashover process is mainly affected by environmental conditions such as the pollution level and humidity, which are the essential parameters influencing the wetting process. This paper presents an investigation of the characteristics of the leakage current (LC) that develops on the surface of a porcelain insulator under contaminated conditions and AC voltage. The study was conducted in an artificial fog chamber, and the LC was characterized for different stages: dry, wetted, and with discharge activity. Time-frequency and spectral analyses were adopted to calculate the evolution of the LC characteristics through the various stages prior to flashover. The preliminary results could be used in analysing the LC to develop a more effective diagnosis of the early signs of dry-band arcing as an indication for insulator washing. Keywords: flashover, harmonic components, leakage current, phase angle, statistical analysis
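A common way to track the harmonic content of the leakage current, as the spectral analysis above does, is a single-bin DFT (Goertzel algorithm) at the fundamental and its odd harmonics. The synthetic waveform below is a hypothetical stand-in for a measured LC trace: the 50 Hz fundamental and 0.3 mA third-harmonic amplitudes are illustrative assumptions, since the abstract reports no raw data.

```python
import math

def goertzel(samples, fs, f_target):
    """Amplitude of the component at f_target (Hz) via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * f_target / fs)          # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    power = s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
    return 2.0 * math.sqrt(max(power, 0.0)) / n  # amplitude estimate

# Synthetic leakage current: 1.0 mA fundamental (50 Hz) plus a 0.3 mA
# third harmonic, as might appear once dry-band arcing distorts the waveform.
fs, n = 5000, 1000  # 0.2 s window -> an integer number of 50 Hz cycles
sig = [math.sin(2 * math.pi * 50 * t / fs) + 0.3 * math.sin(2 * math.pi * 150 * t / fs)
       for t in range(n)]
ratio = goertzel(sig, fs, 150) / goertzel(sig, fs, 50)  # ~0.3
```

A rising third-to-fundamental harmonic ratio is one of the classic indicators of discharge activity in such diagnostics.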
Procedia PDF Downloads 436
11497 Low Cost Surface Electromyographic Signal Amplifier Based on Arduino Microcontroller
Authors: Igor Luiz Bernardes de Moura, Luan Carlos de Sena Monteiro Ozelim, Fabiano Araujo Soares
Abstract:
The development of a low-cost acquisition system for S-EMG signals that is reliable, comfortable for the user, and highly mobile is a relevant proposition in the modern biomedical engineering scenario. In this study, the sampling capacity of the Arduino's Atmel ATmega328 microcontroller, with a 10-bit A/D converter, and its ability to reconstruct a surface electromyography signal are analyzed. An electronic circuit to capture the signal through two differential channels was designed, and signals from the biceps brachii of a healthy 21-year-old man were acquired to test the system prototype. The ARV, MDF, MNF, and RMS estimators were used to compare the acquired signals with physiological values. The Arduino was configured with a sampling frequency of 1.5 kHz for each channel, and tests with the designed circuit yielded an SNR of 20.57 dB. Keywords: electromyography, Arduino, low-cost, Atmel ATmega328 microcontroller
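The amplitude estimators named above (ARV, RMS) and the reported SNR have standard definitions, sketched here for a window of S-EMG samples. This is a generic sketch of the estimators, not the authors' processing code.

```python
import math

def arv(window):
    """Average rectified value of an S-EMG window."""
    return sum(abs(v) for v in window) / len(window)

def rms(window):
    """Root-mean-square amplitude of an S-EMG window."""
    return math.sqrt(sum(v * v for v in window) / len(window))

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in dB from RMS amplitudes."""
    return 20.0 * math.log10(signal_rms / noise_rms)
```

MDF and MNF, the spectral counterparts, would additionally require an estimate of the signal's power spectrum.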
Procedia PDF Downloads 371
11496 Design and Implementation of Campus Wireless Networking for Sharing Resources in Federal Polytechnic Bauchi, Bauchi State, Nigeria
Authors: Hassan Abubakar
Abstract:
This paper will serve as a guide to the good design and implementation of wireless networking for campus institutions in Nigeria. It can be implemented across primary, secondary, and tertiary institutions. The paper describes some technical functions, standard configurations, and layouts of an 802.11 wireless LAN (Local Area Network) that can be implemented across a campus network. The paper also touches upon the wireless infrastructure standards involved with enhanced services, such as voice over wireless and wireless guest hotspots. It also covers the benefits derived from implementing a campus wireless network and sheds some light on how to increase wireless performance and use the campus wireless network to share resources such as software applications, printers, and documents. Keywords: networking, standards, wireless local area network (WLAN), radio frequency (RF), campus
Procedia PDF Downloads 420
11495 Nanostructured Pt/MnO2 Catalysts and Their Performance for Oxygen Reduction Reaction in Air Cathode Microbial Fuel Cell
Authors: Maksudur Rahman Khan, Kar Min Chan, Huei Ruey Ong, Chin Kui Cheng, Wasikur Rahman
Abstract:
Microbial fuel cells (MFCs) represent a promising technology for simultaneous bioelectricity generation and wastewater treatment. Catalysts account for a significant portion of the cost of microbial fuel cell cathodes. Many materials have been tested as aqueous cathodes, but air cathodes are needed to avoid the energy demands of water aeration. The sluggish oxygen reduction reaction (ORR) rate at the air cathode necessitates an efficient electrocatalyst, such as the carbon-supported platinum catalyst (Pt/C), which is very costly. Manganese oxide (MnO2) is a representative metal oxide that has been studied as a promising alternative electrocatalyst for the ORR and has been tested in air-cathode MFCs. However, MnO2 alone has poor electrical conductivity and low stability. In the present work, the MnO2 catalyst was modified by doping it with Pt nanoparticles. The goal of the work was to improve the performance of the MFC with minimum Pt loading. MnO2 and Pt nanoparticles were prepared by hydrothermal and sol-gel methods, respectively. A wet impregnation method was used to synthesize the Pt/MnO2 catalyst. The catalysts were further used as cathode catalysts in air-cathode cubic MFCs, in which anaerobic sludge was inoculated as the biocatalyst and palm oil mill effluent (POME) was used as the substrate in the anode chamber. The as-prepared Pt/MnO2 was characterized comprehensively by field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and cyclic voltammetry (CV), by which its surface morphology, crystallinity, oxidation state, and electrochemical activity were examined, respectively. XPS revealed the Mn(IV) oxidation state and Pt(0) nanoparticle metal, indicating the presence of MnO2 and Pt. The morphology of Pt/MnO2 observed by FESEM shows that the Pt doping did not change the needle-like shape of the MnO2, which provides a large contact surface area.
The electrochemically active area of the Pt/MnO2 catalysts increased from 276 to 617 m2/g as the Pt loading increased from 0.2 to 0.8 wt%. The CV results in O2-saturated neutral Na2SO4 solution showed that the MnO2 and Pt/MnO2 catalysts could catalyze the ORR with different catalytic activities. The MFC with Pt/MnO2 (0.4 wt% Pt) as the air-cathode catalyst generates a maximum power density of 165 mW/m3, which is higher than that of the MFC with the MnO2 catalyst (95 mW/m3). The open-circuit voltage (OCV) of the MFC operated with the MnO2 cathode gradually decreased during 14 days of operation, whereas that of the MFC with the Pt/MnO2 cathode remained almost constant throughout the operation, suggesting the higher stability of the Pt/MnO2 catalyst. Therefore, Pt/MnO2 with 0.4 wt% Pt was successfully demonstrated as an efficient and low-cost electrocatalyst for the ORR in an air-cathode MFC, with higher electrochemical activity and stability, and hence enhanced performance. Keywords: microbial fuel cell, oxygen reduction reaction, Pt/MnO2, palm oil mill effluent, polarization curve
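The volumetric power densities quoted above are typically read off a polarization curve as P = U²/(R_ext·V). In this sketch the chamber volume, external resistance, and cell voltage are illustrative assumptions chosen only to land near the reported magnitudes; none appear in the abstract.

```python
def power_density_mw_per_m3(voltage_v, r_ext_ohm, anode_volume_m3):
    """Volumetric power density in mW/m3: P = U^2 / (R_ext * V)."""
    return (voltage_v ** 2 / r_ext_ohm) / anode_volume_m3 * 1000.0

# Illustrative operating point (not from the abstract): 0.2 V across a
# 1000-ohm external resistor, 250 mL anode chamber.
p = power_density_mw_per_m3(0.2, 1000.0, 250e-6)  # 160 mW/m3
```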
Procedia PDF Downloads 561
11494 Towards a Strategic Framework for State-Level Epistemological Functions
Authors: Mark Darius Juszczak
Abstract:
While epistemology, as a subfield of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has produced an exponential increase in the storage and transmission of human information. That increase has resulted in a particular non-linear dynamic: digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor its consequences have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US federal policy, like that of virtually all other countries, maintains clearly defined boundaries at the national level between various epistemological agencies, agencies that in one way or another mediate the functional use of knowledge. These agencies take the form of patent and trademark offices, national library and archive systems, departments of education, regulators such as the FTC, university systems and their regulations, military research agencies such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding bodies for scientific research, and legislative committees and subcommittees that seek to alter the laws governing epistemological functions. All of these agencies are constantly creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions: they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history when a proxy state-level epistemological strategy existed was between 1961 and 1969, when the Kennedy Administration committed the United States to the Apollo program.
While that program had a singular technical objective as its outcome, the objective was so technologically advanced for its day, and so complex, that it required a massive redirection of state-level epistemological functions; in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive reframing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, the DHS (Department of Homeland Security) evolved to respond to a new type of security threat to the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States. Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program
Procedia PDF Downloads 81
11493 Exploring Research Trends and Topics in Intervention on Metabolic Syndrome Using Network Analysis
Authors: Lee Soo-Kyoung, Kim Young-Su
Abstract:
This study established a network related to metabolic syndrome intervention by conducting a social network analysis of titles, keywords, and abstracts, and it identified emerging topics of research. It visualized the interconnections between critical keywords and investigated their frequency of appearance to interpret the trends in metabolic syndrome intervention measures used in studies conducted over 38 years (1979-2017). It examined a collection of keywords from 8,285 studies using the text rank analyzer of NetMiner 4.0. The analysis revealed 5 groups of newly emerging keywords in the research. By examining the relationships between keywords with reference to their betweenness centrality, the corresponding clusters were identified. New researchers can thus refer to the existing trends to establish the subject of their study, and the direction of future research on metabolic syndrome intervention can be predicted. Keywords: intervention, metabolic syndrome, network analysis, research, the trend
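The keyword network underlying such an analysis is usually a weighted co-occurrence graph: two keywords are linked each time they appear in the same record. A minimal sketch follows; the sample records are hypothetical, and centrality measures such as betweenness would then be computed on the resulting edge list.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_network(keyword_lists):
    """Build a weighted keyword co-occurrence edge list from per-record
    keyword sets; edge weight = number of records sharing the pair."""
    edges = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            edges[(a, b)] += 1
    return edges

# Hypothetical records standing in for study keyword lists.
records = [
    ["metabolic syndrome", "exercise", "diet"],
    ["metabolic syndrome", "exercise", "obesity"],
    ["diet", "obesity"],
]
net = cooccurrence_network(records)
```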
Procedia PDF Downloads 206
11492 Hybrid versus Cemented Fixation in Total Knee Arthroplasty: Mid-Term Follow-Up
Authors: Pedro Gomes, Luís Sá Castelo, António Lopes, Marta Maio, Pedro Mota, Adélia Avelar, António Marques Dias
Abstract:
Introduction: Total knee arthroplasty (TKA) has contributed to the improvement of patients' quality of life, although it has been associated with some complications, including component loosening and polyethylene wear. To prevent these complications, various fixation techniques have been employed. Hybrid TKA, with a cemented tibial and a cementless femoral component, has shown favourable outcomes, although consensus is still lacking in the literature. Objectives: To evaluate the clinical and radiographic results of hybrid versus cemented TKA with an average 5-year follow-up and to analyse the survival rates. Methods: A retrospective study of 125 TKAs performed in 92 patients at our institution between 2006 and 2008, with a minimum follow-up of 2 years. The same prosthesis was used in all knees. Hybrid TKA fixation was performed in 96 knees, with a mean follow-up of 4.8±1.7 years (range, 2-8.3 years), and 29 TKAs received fully cemented fixation, with a mean follow-up of 4.9±1.9 years (range, 2-8.3 years). Selection for hybrid fixation was nonrandomized and based on femoral component fit. The Oxford Knee Score (OKS, 0-48) was used for clinical assessment, and the Knee Society Roentgenographic Evaluation Scoring System was used for the radiographic outcome. The survival rate was calculated using the Kaplan-Meier method, with failure defined as revision of either the tibial or femoral component, for aseptic causes and for all causes (aseptic and infection). The survivorship data were analysed using the log-rank test. SPSS (v22) was used for the statistical analysis. Results: The hybrid group consisted of 72 females (75%) and 24 males (25%), with a mean age of 64±7 years (range, 50-78 years). The preoperative diagnosis was osteoarthritis (OA) in 94 knees (98%), rheumatoid arthritis (RA) in 1 knee (1%), and posttraumatic arthritis (PTA) in 1 knee (1%). The fully cemented group consisted of 23 females (79%) and 6 males (21%), with a mean age of 65±7 years (range, 47-78 years).
The preoperative diagnosis was OA in 27 knees (93%), PTA in 2 knees (7%). The Oxford Knee Scores were similar between the 2 groups (hybrid 40,3±2,8 versus cemented 40,2±3). The percentage of radiolucencies seen on the femoral side was slightly higher in the cemented group 20,7% than the hybrid group 11,5% p0.223. In the cemented group there were significantly more Zone 4 radiolucencies compared to the hybrid group (13,8% versus 2,1% p0,026). Revisions for all causes were performed in 4 of the 96 hybrid TKAs (4,2%) and 1 of the 29 cemented TKAs (3,5%). The reason for revision was aseptic loosening in 3 hybrid TKAs and 1 of the cemented TKAs. Revision was performed for infection in 1 hybrid TKA. The hybrid group demonstrated a 7 years survival rate of 93% for all-cause failures and 94% for aseptic loosening. No significant difference in survivorship was seen between the groups for all-cause failures or aseptic failures. Conclusions: Hybrid TKA yields similar intermediate-term results and survival rates as fully cemented total knee arthroplasty and remains a viable option in knee joint replacement surgery.Keywords: hybrid, survival rate, total knee arthroplasty, orthopaedic surgery
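The Kaplan-Meier survival estimate used in the analysis can be sketched in a few lines of Python; the follow-up times in the usage example are illustrative, not the study's data.

```python
from collections import Counter

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times  : follow-up time for each subject
    events : 1 = failure (e.g. revision), 0 = censored
    Returns a list of (event time, survival probability).
    """
    n_at_risk = len(times)
    deaths = Counter(t for t, e in zip(times, events) if e == 1)
    leaving = Counter(times)  # everyone leaves the risk set at their time
    surv = 1.0
    curve = []
    for t in sorted(leaving):
        d = deaths.get(t, 0)
        if d:
            surv *= 1 - d / n_at_risk  # product-limit step at each event time
            curve.append((t, surv))
        n_at_risk -= leaving[t]
    return curve

# Toy example: 4 subjects, one censored at time 2.
print(kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1]))
```

Subjects censored at an event time are, by convention, still counted as at risk at that time, which matches the standard product-limit definition.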
Procedia PDF Downloads 597
11491 Effects of Virtual Reality on the Upper Extremity Spasticity and Motor Function in Patients with Stroke: A Single Blinded Randomized Controlled Trial
Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini, Omid Seyed Esmaeili, Rouzbeh Kezemi, Noushin Mehrbod, Nazanin Vahed, Tahereh Hajiahmad, Noureddin Nakhostin Ansari
Abstract:
Background: Stroke is a disabling neurological disease. Rehabilitative therapies are important treatment methods. This clinical trial was conducted to compare the effects of VR combined with conventional rehabilitation versus conventional rehabilitation alone on spasticity and motor function in stroke patients. Materials and Methods: In this open-label randomized controlled clinical trial, 40 consecutive patients with stable first-ever ischemic stroke in the past three to 12 months, referred to a rehabilitation clinic in Tehran, Iran, in 2020, were enrolled. After signing the informed written consent form, subjects were randomly assigned by block randomization (five in each block) with 1:1 allocation into two groups of 20 cases: a conventional plus VR therapy group (45-minute conventional therapy session plus 15-minute VR therapy) and a conventional group (60-minute conventional therapy session). The VR rehabilitation program was designed and developed in several stages. Outcomes were the modified Ashworth scale, recovery stage score for motor function, range of motion (ROM) of shoulder abduction/wrist extension, and patients' satisfaction rate. Data were compared after study termination. Results: The satisfaction rate was significantly better in the combination group (P=0.003). Of the outcome measures, only wrist extension differed between groups, favoring the combination group; these differences were statistically significant (P < 0.05). Conclusion: Virtual reality plus conventional rehabilitation therapy is superior to conventional rehabilitation alone for wrist and elbow spasticity and motor function in patients with stroke.Keywords: stroke, virtual therapy, rehabilitation, treatment
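The block randomization used for group assignment can be sketched as below. Since exact 1:1 balance within a block requires an even block size, this illustrative sketch uses a block size of 4 and a fixed seed; the group labels and parameters are ours, not the trial's protocol.

```python
import random

def block_randomize(n_per_group, block_size=4, seed=42):
    """Assign subjects to two arms in randomly permuted blocks,
    guaranteeing 1:1 balance within every block."""
    assert block_size % 2 == 0, "1:1 allocation needs an even block size"
    rng = random.Random(seed)
    total = 2 * n_per_group
    allocation = []
    while len(allocation) < total:
        # Each block holds an equal number of each arm, then is shuffled.
        block = (["VR+conventional"] * (block_size // 2)
                 + ["conventional"] * (block_size // 2))
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:total]

print(block_randomize(20)[:8])  # first two blocks of the allocation list
```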
Procedia PDF Downloads 239
11490 Spatial Accessibility Analysis of Kabul City Public Transport
Authors: Mohammad Idrees Yusofzai, Hirobata Yasuhiro, Matsuo Kojiro
Abstract:
Kabul is the capital of Afghanistan and its educational and industrial focal point. Additionally, the population of Kabul has grown recently and will continue to increase because of returning refugees and the migration of people from other provinces to Kabul city. With this increase in population, issues of urban congestion and other urban transportation problems in Kabul city arise. One of these problems is the public transport (large bus) service, which needs to be modified and enhanced, especially the large bus routes operating in each of the 22 zones of Kabul city. To achieve the above-mentioned goal of improving public transport, spatial accessibility analysis is one of the important tools for assessing the effectiveness of the transportation system and the urban transport policy of a city, because accessibility indicators serve as an alternative means to support public policies aimed at reinforcing sustainable urban space. The case study of this research compares the present model (present bus routes) with a modified model of public transport. In the present model, the bus routes in most of the zones are active, although with low frequency and no published schedule, and the accessibility result is analyzed in four cases based on the variables of accessibility. In the modified model, all zones of Kabul are taken into consideration, with specified origins and high frequency. The number of frequencies is kept high; however, this number is based on the number of buses the Millie Bus Enterprise Authority (MBEA) owns. The same set of cases is applied to the modified model to identify its best accessibility. Indeed, the modified model has a positive impact on the congestion level in Kabul city. In addition, person trips and trip distribution were analyzed to understand how people move in the study area by each mode of transportation.
The general aims of this research are thus to assess the present movement of people, identify zones in need of public transport, and assess the equity level of accessibility in Kabul city. The methodology used in this research is based on a gravity model of accessibility; in addition, the generalized cost (time) of travel by travel mode is calculated. The main data come from a person trip survey, socio-economic characteristics, and demographic data from the Japan International Cooperation Agency's 2008 study of Kabul city, as well as from previous research on travel patterns; the remaining data on present bus lines and routes were obtained from MBEA. In conclusion, this research identifies zones where public transport accessibility is high and where it is low. It was found that in both models the downtown area, or central zones, of Kabul city has a high level of accessibility. Moreover, the present model is the least favorable compared with the modified model based on the accessibility analysis.Keywords: accessibility, bus generalized cost, gravity model, public transportation network
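A gravity-type accessibility index of the kind described computes, for each zone, the sum of opportunities in all zones weighted by a decaying function of generalized travel cost. A minimal sketch with a negative-exponential impedance; the zone data and the decay parameter beta are invented for illustration, not taken from the Kabul study.

```python
import math

def gravity_accessibility(opportunities, cost, beta=0.1):
    """A_i = sum_j O_j * exp(-beta * c_ij): accessibility of each origin
    zone i to opportunities O_j at generalized travel cost c_ij."""
    n = len(opportunities)
    return [sum(opportunities[j] * math.exp(-beta * cost[i][j])
                for j in range(n))
            for i in range(n)]

# Three zones; zone 1 is central (cheap to reach) and holds the most jobs.
O = [100, 300, 100]
C = [[0, 10, 30], [10, 0, 10], [30, 10, 0]]  # generalized cost, minutes
print(gravity_accessibility(O, C))
```

As expected, the central zone scores highest, mirroring the paper's finding that the downtown zones of Kabul have the highest accessibility.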
Procedia PDF Downloads 201
11489 Financial Ethics: A Review of 2010 Flash Crash
Authors: Omer Farooq, Salman Ahmed Khan, Sadaf Khalid
Abstract:
Modern day stock markets have become almost entirely automated. Even though this means increased profits for investors, with algorithms acting upon the slightest price change within microseconds, it has also given birth to many ethical dilemmas, in the sense that the slightest mistake can cause people to lose their livelihoods. This paper reviews one such event, which happened on May 06, 2010, in which $1 trillion disappeared from the Dow Jones Industrial Average. We discuss its various aspects and the ethical dilemmas that have arisen from it.Keywords: flash crash, market crash, stock market, stock market crash
Procedia PDF Downloads 524
11488 Numerical Study on Jatropha Oil Pool Fire Behavior in a Compartment
Authors: Avinash Chaudhary, Akhilesh Gupta, Surendra Kumar, Ravi Kumar
Abstract:
This paper presents a numerical study of a Jatropha oil pool fire in a compartment. A fire experiment with jatropha oil was conducted in a compartment of size 4 m x 4 m x m to study fire development and temperature distribution. Fuel was burned in the center of the compartment in a pool of 0.5 m diameter with an initial fuel depth of 0.045 m. Corner temperatures in the compartment, doorway temperatures, and hot gas layer temperatures at various locations were measured. Numerical simulations were carried out using the Fire Dynamics Simulator (FDS) software at grid sizes of 0.05 m and 0.12 m, and the heat release rate of jatropha oil, measured using the mass loss method, was input into FDS. Experimental results show that, like other fuel fires, the whole combustion process can be divided into four stages: an initial stage, a growth stage, a steady (fully developed) stage, and a decay stage. The fire behavior shows a two-zone profile, where the upper zone consists mainly of hot gases while the lower zone remains relatively cold. In this study, the temperatures predicted by the simulation are in good agreement with measurements in the upper zone of the compartment. Near the interface of the hot and cold zones, deviations between the simulated and experimental results were observed, probably due to differences in the smoke layer height predicted by FDS. Also, changing the grid size from 0.12 m to 0.05 m had no effect on temperatures in the upper zone, while in the lower zone the 0.05 m grid showed satisfactory agreement with the experimental results. Overall, the calculated temperatures at various locations matched the experimental results well, providing an effective method with reasonable results for studying the burning characteristics of jatropha oil through numerical simulation.Keywords: jatropha oil, compartment fire, heat release rate, FDS (fire dynamics simulator), numerical simulation
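The mass loss method derives the heat release rate as the product of the fuel mass-loss rate and an effective heat of combustion, HRR = m_dot * delta_h_c. A minimal sketch; the ~39 MJ/kg value for jatropha oil and the mass-time series are assumptions for illustration, not figures from the paper.

```python
def heat_release_rate(times_s, masses_kg, delta_hc_mj_per_kg=39.0):
    """HRR(t) = mass-loss rate * effective heat of combustion.
    Central differences on the measured fuel mass; returns HRR in kW
    for the interior sample points."""
    hrr = []
    for i in range(1, len(masses_kg) - 1):
        mdot = -(masses_kg[i + 1] - masses_kg[i - 1]) / (times_s[i + 1] - times_s[i - 1])
        hrr.append(mdot * delta_hc_mj_per_kg * 1000.0)  # MJ/s -> kW
    return hrr

# A steady 2 g/s mass loss should give a constant HRR of about 78 kW.
print(heat_release_rate([0, 10, 20, 30], [10.0, 9.98, 9.96, 9.94]))
```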
Procedia PDF Downloads 261
11487 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems
Authors: Georgi Y. Georgiev, Matthew Brouillet
Abstract:
This research applies agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory: the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP), which suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that system information will increase, as more information is required to describe the developing structure. To test this, an agent-based model is developed that simulates an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point.
Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between internal entropy decrease rate and external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.Keywords: complexity, self-organization, agent based modelling, efficiency
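The entropy bookkeeping described above can be sketched by treating the ants' distribution over grid cells as a probability distribution; the cell counts below are invented for illustration, not output of the NetLogo model.

```python
import math

def spatial_entropy(cell_counts):
    """Shannon entropy (bits) of the ants' distribution over grid cells:
    maximal when the ants are spread uniformly, and lower once they
    concentrate on a path."""
    total = sum(cell_counts)
    h = 0.0
    for c in cell_counts:
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h

# Disorder vs. order: 100 ants uniform over 4 cells, then on one cell.
print(spatial_entropy([25, 25, 25, 25]))  # maximal: 2 bits for 4 cells
print(spatial_entropy([97, 1, 1, 1]))     # much lower after path formation
```

Tracking this quantity per tick yields exactly the decreasing entropy curve, inflection point, and slope the study analyzes.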
Procedia PDF Downloads 71
11486 The Role of Disturbed Dry Afromontane Forest of Ethiopia for Biodiversity Conservation and Carbon Storage
Authors: Mindaye Teshome, Nesibu Yahya, Carlos Moreira Miquelino Eleto Torres, Pedro Manuel Villaa, Mehari Alebachew
Abstract:
Arbagugu forest is one of the remnant dry Afromontane forests under severe anthropogenic disturbance in central Ethiopia. Despite this fact, up-to-date information is lacking on the status of the forest and its role in climate change mitigation. In this study, we evaluated the woody species composition, structure, biomass, and carbon stock of this forest. We employed a systematic random sampling design and established fifty-three sample plots (20 × 100 m) to collect the vegetation data. A total of 37 woody species belonging to 25 families were recorded. The densities of seedlings, saplings, and mature trees were 1174, 101, and 84 stems ha-1, respectively. The total basal area of trees with DBH (diameter at breast height) ≥ 2 cm was 21.3 m2 ha-1. The characteristic trees of dry Afromontane forest, such as Podocarpus falcatus, Juniperus procera, and Olea europaea subsp. cuspidata, exhibited a fair regeneration status. In contrast, the least abundant species Lepidotrichilia volkensii, Canthium oligocarpum, Dovyalis verrucosa, Calpurnia aurea, and Maesa lanceolata exhibited good regeneration status. Some tree species, such as Polyscias fulva, Schefflera abyssinica, Erythrina brucei, and Apodytes dimidiata, lack regeneration. The carbon stored in the forest ranged between 6.3 Mg C ha-1 and 835.6 Mg C ha-1, equivalent to 639.6 Mg C ha-1 on average. The forest had low woody species richness and diversity. The regeneration study also revealed that a significant number of tree species had unsatisfactory regeneration status. Besides, the forest had a lower carbon stock density compared with other dry Afromontane forests.
This implies the urgent need for forest conservation and restoration activities by the local government, conservation practitioners, and other concerned bodies to maintain the forest and sustain the various ecosystem goods and services provided by the Arbagugu forest.Keywords: aboveground biomass, forest regeneration, climate change, biodiversity conservation, restoration
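Stand basal area of the kind reported above (21.3 m2 ha-1) is obtained by summing stem cross-sectional areas computed from DBH and scaling each 20 m × 100 m (0.2 ha) plot to one hectare. A sketch; the DBH values in the usage example are invented:

```python
import math

def basal_area_per_ha(dbh_cm, plot_area_m2=2000.0):
    """Stand basal area (m^2/ha) from stem DBH measurements (cm)
    taken in one 20 m x 100 m inventory plot."""
    # d/200 converts DBH in cm to radius in m; pi*r^2 is stem area.
    ba_m2 = sum(math.pi * (d / 200.0) ** 2 for d in dbh_cm)
    return ba_m2 * (10000.0 / plot_area_m2)  # scale plot to 1 ha

# One plot with a few illustrative stems (DBH in cm).
print(basal_area_per_ha([52.0, 33.5, 12.0, 8.4, 2.1]))
```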
Procedia PDF Downloads 115
11485 FPGA Implementation of Adaptive Clock Recovery for TDMoIP Systems
Authors: Semih Demir, Anil Celebi
Abstract:
Circuit-switched networks, widely used until the end of the 20th century, have been transformed into packet-switched networks. Time Division Multiplexing over Internet Protocol (TDMoIP) is a system that enables Time Division Multiplexing (TDM) traffic to be carried over packet-switched networks (PSN). In TDMoIP systems, the devices that send TDM data to the PSN and those that receive it from the network must operate at the same clock frequency. In this study, we aimed to implement the clock synchronization process in Field Programmable Gate Array (FPGA) chips using the time information attached to the packets received from the PSN. The designed hardware is verified using datasets obtained for different carrier types and by comparing the results with a software model. Field tests were also performed using a real-time TDMoIP system.Keywords: clock recovery on TDMoIP, FPGA, MATLAB reference model, clock synchronization
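The core of adaptive clock recovery is estimating the remote service-clock frequency from the timestamps carried in received packets versus their local arrival times. A simplified software sketch in the spirit of a reference model, using a least-squares slope as the frequency estimate; a real FPGA design would instead filter the timestamp error in a closed feedback loop, and this code is an illustration of the principle, not the paper's hardware.

```python
def estimate_frequency_ratio(remote_ts, local_ts):
    """Least-squares slope of remote packet timestamps against local
    arrival times: an estimate of f_remote / f_local that a recovered
    clock would be steered toward."""
    n = len(remote_ts)
    mx = sum(local_ts) / n
    my = sum(remote_ts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(local_ts, remote_ts))
    den = sum((x - mx) ** 2 for x in local_ts)
    return num / den

# Remote clock runs 100 ppm fast relative to the local oscillator.
local = [i * 0.001 for i in range(100)]      # arrival times, s
remote = [t * 1.0001 for t in local]         # embedded timestamps, s
print(estimate_frequency_ratio(remote, local))
```

In practice the arrival times are jittered by the PSN, which is why the estimator (or the hardware loop filter) must average over many packets.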
Procedia PDF Downloads 282
11484 Approach-Avoidance Conflict in the T-Maze: Behavioral Validation for Frontal EEG Activity Asymmetries
Authors: Eva Masson, Andrea Kübler
Abstract:
Anxiety disorders (AD) are the most prevalent psychological disorders. However, far from all affected individuals are diagnosed and receive treatment. This gap is probably due to the diagnostic criteria, which rely on symptoms (according to the DSM-5 definition) with no objective biomarker. Approach-avoidance conflict tasks are one common way to simulate such disorders in a lab setting, with most paradigms focusing on the relationships between behavior and neurophysiology. Approach-avoidance conflict tasks typically place participants in a situation where they have to make a decision that leads to both positive and negative outcomes, thereby sending conflicting signals that trigger the Behavioral Inhibition System (BIS). Furthermore, behavioral validation of such paradigms adds credibility to the tasks: with overt conflict behavior, it is safer to assume that the task actually induced a conflict. Some of these tasks have linked asymmetrical frontal brain activity to induced conflicts and the BIS. However, there is currently no consensus on the direction of the frontal activation. The authors present here a modified version of the T-Maze paradigm, a motivational conflict desktop task, in which behavior is recorded simultaneously with high-density EEG (HD-EEG). Methods: In this within-subject design, HD-EEG and behavior of 35 healthy participants were recorded. EEG data were collected with a 128-channel sponge-based system. The motivational conflict desktop task consisted of three blocks of repeated trials. Each block was designed to elicit a slightly different behavioral pattern, to increase the chances of eliciting conflict. These behavioral patterns were, however, similar enough to allow comparison of the number of trials categorized as 'overt conflict' between the blocks. Results: Overt conflict behavior was exhibited in all blocks, but on average in under 10% of the trials in each block.
However, changing the order of the paradigms successfully introduced a 'reset' of the conflict process, thereby providing more trials for analysis. As for the EEG correlates, the authors expect a different pattern for trials categorized as conflict compared to the other trials. More specifically, we expect elevated alpha-band power in the left frontal electrodes at around 200 ms post-cueing compared to the right (relatively higher right frontal activity), followed by an inversion around 600 ms later. Conclusion: With this comprehensive approach to a psychological mechanism, new evidence would be brought to the frontal asymmetry discussion and its relationship with the BIS. Furthermore, with the present task focusing on a very particular type of motivational approach-avoidance conflict, it opens the door to further variations of the paradigm that introduce the different kinds of conflict involved in AD. Even though its application as a potential biomarker appears difficult, because of the individual reliability of both the task and peak frequency in the alpha range, we hope to open the discussion on task robustness for future neuromodulation and neurofeedback applications.Keywords: anxiety, approach-avoidance conflict, behavioral inhibition system, EEG
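The expected asymmetry can be quantified with a frontal alpha asymmetry (FAA) index, ln(right alpha power) minus ln(left alpha power); since alpha is inversely related to cortical activation, more right-hemisphere alpha is read as relatively higher left activation, and vice versa. A stdlib-only sketch with a naive DFT; a real pipeline would estimate band power at F3/F4 with Welch's method, and the synthetic signals here are illustrative.

```python
import math

def band_power(signal, fs, f_lo=8.0, f_hi=13.0):
    """Alpha-band power via a naive DFT (stdlib only, O(n^2))."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

def frontal_alpha_asymmetry(left, right, fs):
    """FAA = ln(right alpha power) - ln(left alpha power).
    Positive FAA: more right alpha, i.e. relatively higher LEFT activity."""
    return math.log(band_power(right, fs)) - math.log(band_power(left, fs))

# 1 s of synthetic data at 128 Hz: a 10 Hz alpha rhythm twice as large
# over the right electrode, so FAA = ln(2^2) = ln 4.
fs = 128
left = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
right = [2 * math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
print(frontal_alpha_asymmetry(left, right, fs))
```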
Procedia PDF Downloads 45
11483 Explantation of Osseo-Integrated Implant Using Electrosurgery and Ultrasonic Instrumentation
Authors: Stefano Andrea Denes
Abstract:
The use of dental implants to rehabilitate edentulous patients has become a well-established and effective treatment option; however, despite its high success rate, this treatment is not free of complications. Fracture of the implant body is a rare cause of failure, but when it does occur it can present technical challenges. In this article, we report the complete removal of a fractured osseointegrated implant using electrosurgery and ultrasonic instrumentation. The postoperative course was uneventful; no bleeding, infection, or hematoma formation was observed.Keywords: dental implant, oral surgery, electrosurgery, piezosurgery
Procedia PDF Downloads 275
11482 Numerical Investigation of the Boundary Conditions at Liquid-Liquid Interfaces in the Presence of Surfactants
Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji
Abstract:
Liquid-liquid interfacial flow is an important process with applications across many spheres. One such application is residual oil mobilization, where crude oil and low salinity water are emulsified due to lowered interfacial tension under conditions of low shear rate. The amphiphilic components (asphaltenes and resins) in crude oil are considered to assemble at the interface between the two immiscible liquids. To explain emulsification, drag, and snap-off suppression as the main effects of low salinity water, mobilization of residual oil is visualized as thickening and slip of the wetting phase at the brine/crude oil interface, which results in the squeezing and drag of the non-wetting phase toward the pressure sinks. Meanwhile, defining the boundary conditions for such a system can be very challenging, since the interfacial dynamics depend not only on interfacial tension but also on the flow rate. Hence, understanding the flow boundary condition at the brine/crude oil interface is an important step towards defining the influence of low salinity water composition on residual oil mobilization. This work presents a numerical evaluation of three slip boundary conditions that may apply at liquid-liquid interfaces. A mathematical model was developed to describe the evolution of a viscoelastic interfacial thin liquid film. The base model is developed by asymptotic expansion of the full Navier-Stokes equations for fluid motion due to gradients of surface tension. This model was upscaled to describe the dynamics of the film surface deformation. Subsequently, the Jeffreys model was integrated into the formulation to account for viscoelastic stress within a long-wave approximation of the Navier-Stokes equations. To study the fluid response to a prescribed disturbance, a linear stability analysis (LSA) was performed, and the dispersion relation and the corresponding characteristic equation for the growth rate were obtained.
Three slip boundary conditions (slip, 1; locking, -1; and no-slip, 0) were examined using the resulting characteristic equation. The dynamics of the evolved interfacial thin liquid film were also numerically evaluated under each boundary condition. The linear stability analysis shows that the boundary conditions of such systems are greatly impacted by the presence of amphiphilic molecules across the three values of interfacial tension tested. The results for the slip and locking conditions are consistent with the fundamental solution representation of the diffusion equation, in which the film decays. Under both of these boundary conditions, the interfacial film responds to exposure time in a similar manner, with an increasing growth rate that results in the formation of more droplets over time. In contrast, the no-slip boundary condition yielded unbounded growth and was not affected by interfacial tension.Keywords: boundary conditions, liquid-liquid interfaces, low salinity water, residual oil mobilization
Procedia PDF Downloads 133
11481 A Constructed Wetland as a Reliable Method for Grey Wastewater Treatment in Rwanda
Authors: Hussein Bizimana, Osman Sönmez
Abstract:
Constructed wetlands are currently among the most widely recognized wastewater treatment options, especially in developing countries, where they have the potential to improve water quality and create valuable wildlife habitat while keeping operation and maintenance requirements and costs relatively low. The lack of grey wastewater treatment facilities at the Kigali Institute of Science and Technology in Rwanda causes pollution in the surrounding localities of Rugunga sector, where poor sanitation is already a problem. In order to treat the grey water produced at the Kigali Institute of Science and Technology, with its high BOD concentration, high nutrient concentration, and high alkalinity, a pilot-scale horizontal sub-surface flow constructed wetland was designed for operation at the Institute. The study was carried out with a sedimentation tank of 5.5 m x 1.42 m x 1.2 m deep and a horizontal sub-surface constructed wetland of 4.5 m x 2.5 m x 1.42 m deep. The grey wastewater flow rate of 2.5 m3/d passes through the vegetated wetland and sandy pilot plant. The filter media consisted of 0.6 to 2 mm coarse sand with a hydraulic conductivity of 0.00003472 m/s, and cattails (Typha latifolia spp.) were used as the plant species. The effluent flow rate of the plant is designed to be 1.5 m3/day and the retention time will be 24 hrs. BOD, COD, and TSS removals of 72% to 79% are estimated, while nutrient (nitrogen and phosphate) removal is estimated to be in the range of 34% to 53%. Every effluent characteristic is expected to meet the Rwanda Utility Regulatory Agency guidelines, primarily because the retention time allowed is sufficient to reduce the contaminants in the raw wastewater. A treated water reuse system was developed so that the water can be used again in the campus irrigation system.Keywords: constructed wetlands, hydraulic conductivity, grey waste water, cattails
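The design retention time can be related to the bed geometry through the nominal hydraulic retention time, HRT = V * n / Q, where n is the porosity of the medium. A sketch; the coarse-sand porosity of 0.35 is an assumed typical value, not a figure from the paper, and the result is sensitive to it.

```python
def hydraulic_retention_time_days(length_m, width_m, depth_m,
                                  flow_m3_per_day, porosity=0.35):
    """Nominal HRT = V * n / Q for a sub-surface flow wetland bed:
    only the pore volume of the medium is available to the water."""
    volume = length_m * width_m * depth_m
    return volume * porosity / flow_m3_per_day

# The bed dimensions and design inflow quoted above.
print(hydraulic_retention_time_days(4.5, 2.5, 1.42, 2.5))
```

The formula makes explicit why HRT depends on the assumed porosity as strongly as on the geometry and flow rate.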
Procedia PDF Downloads 611
11480 Performance Comparison of Joint Diagonalization Structure (JDS) Method and Wideband MUSIC Method
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
We simulate an efficient localization algorithm for multiple wideband, non-stationary sources by exploiting both the non-stationarity of the signals and the array geometry. The algorithm is based on the joint diagonalization structure (JDS) of a set of short-time power spectrum matrices taken at different time instants in each frequency bin. JDS can be used for quick and accurate localization of multiple non-stationary sources. The JDS algorithm is a one-stage process, i.e., it directly searches for the directions of arrival (DOAs) over the continuous location parameter space. The JDS method requires that the number of sensors be no less than the number of sources. The simulation results show that the JDS method can localize two sources when their angular separation is at least 7 degrees, whereas wideband MUSIC requires a separation of 18 degrees.Keywords: joint diagonalization structure (JDS), wideband direction of arrival (DOA), wideband MUSIC
Procedia PDF Downloads 471
11479 Ventilator Associated Pneumonia in a Medical Intensive Care Unit, Incidence and Risk Factors: A Case Control Study
Authors: Ammar Asma, Bouafia Nabiha, Ben Cheikh Asma, Ezzi Olfa, Mahjoub Mohamed, Sma Nesrine, Chouchène Imed, Boussarsar Hamadi, Njah Mansour
Abstract:
Background: Ventilator-associated pneumonia (VAP) is currently recognized as one of the most relevant causes of morbidity and mortality among intensive care unit (ICU) patients worldwide. Identifying modifiable risk factors for VAP could be helpful for future controlled interventional studies aiming at improving VAP prevention. The purposes of this study were to determine the incidence of and risk factors for VAP in a Tunisian medical ICU. Materials / Methods: A retrospective case-control study based on a prospective database collected over a 14-month period, from September 15th, 2015 through November 15th, 2016, in an 8-bed medical ICU. Patients under ventilation for over 48 h were included. The number of cases was estimated with Epi Info software, with the power of the statistical test set at 90%. Each case patient was matched to two controls according to the length of mechanical ventilation (MV) before VAP for cases and the total length of MV for controls. VAP was defined according to the American Thoracic Society / Infectious Diseases Society of America guidelines, and classified as early-onset or late-onset depending on whether the infectious process occurred within or after 96 h of ICU admission. Patients' risk factors, causes of admission, comorbidities, and respiratory specimens collected were reviewed. Univariate and multivariate analyses were performed to determine variables associated with VAP, with a p-value < 0.05 considered significant. Results: During the study period, a total of 169 patients under mechanical ventilation were considered; 34 patients (20.11%) developed at least one episode of VAP in the ICU. The incidence rate of VAP was 14.88 per 1000 ventilation-days. Among these cases, 9 (26.5%) were early-onset and 25 (73.5%) were late-onset VAP. The diagnosis was definite in 66.7% of cases. Tracheal aspiration was positive in 80% of cases. Multidrug-resistant Acinetobacter baumannii was the most common species detected in cases: 67.64% (n=23).
The mortality rate among cases was 88.23% (n=30). In univariate analysis, the patients with VAP were statistically more likely to suffer from cardiovascular diseases (p=0.035), prolonged duration of sedation (p=0.009), and tracheostomy (p=0.001); they also had a higher number of re-intubations (p=0.017) and a longer total time of intubation (p=0.012). Multivariate analysis showed that cardiovascular diseases (OR=4.44; 95% CI=[1.3-14]; p=0.016), tracheostomy (OR=4.2; 95% CI=[1.16-15.12]; p=0.028), and prolonged duration of sedation (OR=1.21; 95% CI=[1.07-1.36]; p=0.002) were independent risk factors for the development of VAP. Conclusion: VAP constitutes a therapeutic challenge in the ICU setting; therefore, strategies that effectively prevent VAP are needed. An infection control training program for all healthcare professionals in this unit, insisting on care bundles and the elaboration of procedures, is planned to effectively reduce the incidence rate of VAP.Keywords: case control study, intensive care unit, risk factors, ventilator associated pneumonia
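The adjusted odds ratios above come from the multivariate model, but the univariate odds ratio for a 2x2 exposure table, with its Woolf (log-method) 95% confidence interval, is easy to reproduce. The counts in the usage example are invented, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
       a = exposed cases,   b = exposed controls,
       c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    # Standard error of ln(OR) by Woolf's method.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Toy table: 10/5 exposed cases/controls, 10/20 unexposed.
print(odds_ratio_ci(10, 5, 10, 20))
```

An interval excluding 1 (as for all three factors reported above) marks the association as statistically significant at the 5% level.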
Procedia PDF Downloads 397
11478 Uplift Segmentation Approach for Targeting Customers in a Churn Prediction Model
Authors: Shivahari Revathi Venkateswaran
Abstract:
Segmenting customers plays a significant role in churn prediction. It helps the marketing team with proactive and reactive customer retention. For reactive retention, the retention team reaches out to customers who have already shown intent to disconnect by giving them special offers. For proactive retention, the marketing team uses a churn prediction model, which ranks each customer from 1 to 100, where rank 1 indicates the highest risk of churn/disconnection. The churn prediction model is built using an XGBoost model. However, with the churn rank alone, the marketing team can only reach out to customers based on their individual ranks. Profiling different groups of customers and framing different marketing strategies for targeted groups is not possible with churn ranks alone. For this, customers must be grouped into segments based on their profiles, such as demographics and other non-controllable attributes. This helps the marketing team frame different offer groups for the targeted audience and prevent them from disconnecting (proactive retention). For segmentation, machine learning approaches like k-means clustering will not form unique customer segments in which all customers share the same attributes. This paper presents an alternative approach that finds all combinations of unique segments that can be formed from the user attributes and then identifies the segments with uplift (a churn rate higher than the baseline churn rate). For this, search algorithms such as fast search and recursive search are used. Further, within each segment, customers can be targeted using their individual churn ranks from the churn prediction model.
Finally, a UI (User Interface) was developed for the marketing team to interactively search for the meaningful segments that are formed and to target the right audience for future marketing campaigns, preventing them from disconnecting.Keywords: churn prediction modeling, XGBoost model, uplift segments, proactive marketing, search algorithms, retention, k-means clustering
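The segment search can be sketched with stdlib itertools: enumerate every attribute-value combination, compute each segment's churn rate, and keep those above the baseline. All names and the toy data are illustrative; the paper's fast and recursive search algorithms prune this same space to scale beyond a brute-force enumeration.

```python
from itertools import combinations, product

def find_uplift_segments(customers, churned, attrs, min_size=2):
    """Enumerate segments defined by every combination of the given
    attributes; keep those whose churn rate exceeds the overall
    baseline (the 'uplift' segments)."""
    baseline = sum(churned) / len(churned)
    segments = []
    for r in range(1, len(attrs) + 1):
        for attr_set in combinations(attrs, r):
            values = [sorted({c[a] for c in customers}) for a in attr_set]
            for combo in product(*values):
                idx = [i for i, c in enumerate(customers)
                       if all(c[a] == v for a, v in zip(attr_set, combo))]
                if len(idx) >= min_size:
                    rate = sum(churned[i] for i in idx) / len(idx)
                    if rate > baseline:
                        segments.append((dict(zip(attr_set, combo)),
                                         len(idx), rate))
    return baseline, segments

# Toy data: the northern region churns at twice the baseline rate.
customers = [{"region": "N"}] * 4 + [{"region": "S"}] * 4
churned = [1, 1, 0, 0, 0, 0, 0, 0]
print(find_uplift_segments(customers, churned, ["region"]))
```

Each returned segment (attribute values, size, churn rate) is exactly the unit the UI described above would surface for campaign targeting.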
Procedia PDF Downloads 74
11477 Spatial Audio Player Using Musical Genre Classification
Authors: Jun-Yong Lee, Hyoung-Gook Kim
Abstract:
In this paper, we propose a smart music player that combines musical genre classification and spatial audio processing. The musical genre is classified based on content analysis of the musical segment detected from the audio stream. In parallel with the classification, spatial audio quality is achieved by adding artificial reverberation in a virtual acoustic space to the input mono sound. Thereafter, at playback, the spatial sound is boosted with frequency gains chosen according to the musical genre. Experiments measured the accuracy of detecting the musical segment from the audio stream and of its musical genre classification. A listening test was performed on the virtual-acoustic-space-based spatial audio processing.Keywords: automatic equalization, genre classification, music segment detection, spatial audio processing
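The artificial reverberation step can be illustrated with a feedback comb filter, the basic building block of a Schroeder-style reverberator; the paper does not specify its exact algorithm, so this is a generic sketch of the technique, with illustrative delay and gain values.

```python
def feedback_comb(x, delay, gain):
    """Feedback comb filter y[n] = x[n] + g * y[n - D]: produces a
    train of decaying echoes spaced D samples apart, the elementary
    component of Schroeder artificial reverberation."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        y[n] = x[n] + (gain * y[n - delay] if n >= delay else 0.0)
    return y

# Impulse response: echoes at multiples of the delay, decaying by `gain`.
impulse = [1.0] + [0.0] * 99
print(feedback_comb(impulse, delay=10, gain=0.5)[:25])
```

A full reverberator would run several such combs with mutually prime delays in parallel, followed by allpass stages, but the decaying echo train above is the core effect.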
Procedia PDF Downloads 430