Search results for: large deviation distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12379

2569 Development of a Predictive Model to Prevent Financial Crisis

Authors: Tengqin Han

Abstract:

Delinquency has been a crucial factor in economics throughout the years. Commonly seen in credit cards and mortgages, it played one of the crucial roles in causing the most recent financial crisis, in 2008. In each case, a delinquency is a sign that the borrower is unable to pay off the debt, and it may thus cause a loss of property in the end. Individually, one case of delinquency seems unimportant compared to the entire credit system. In China, an emerging economic entity, national and economic strength have grown rapidly, and the gross domestic product (GDP) growth rate has remained as high as 8% in the past decades. However, potential risks exist behind the appearance of prosperity, and among them the credit system is the most significant one. Because mortgages involve long terms and large balances, it is critical to monitor the risk during the performance period. In this project, about 300,000 mortgage account records are analyzed in order to develop a predictive model for the probability of delinquency. Through univariate analysis, the data are cleaned up, and through bivariate analysis, the variables with strong predictive power are detected. The project is divided into two parts. In the first part, the analysis data of 2005 are split into two parts: 60% for model development and 40% for in-time model validation. The KS of model development is 31, and the KS for in-time validation is 31, indicating the model is stable. In addition, the model is further validated by out-of-time validation, which uses 40% of the 2006 data; the KS is 33, indicating the model is still stable and robust. In the second part, the model is improved by the addition of macroeconomic indexes, including GDP, the consumer price index, the unemployment rate, the inflation rate, etc. The data of 2005 to 2010 are used for model development and validation.
Compared with the base model (without macroeconomic variables), the KS increases from 41 to 44, indicating that the macroeconomic variables improve the separation power of the model and make the prediction more accurate.
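The KS separation measure quoted above is the maximum gap between the score distributions of delinquent and non-delinquent accounts. A minimal sketch, using synthetic scores (all numbers below are hypothetical, not the project's data):

```python
import numpy as np

def ks_statistic(scores_good, scores_bad):
    # max vertical gap between the two empirical CDFs, on the 0-100 scale
    thresholds = np.sort(np.concatenate([scores_good, scores_bad]))
    cdf_good = np.searchsorted(np.sort(scores_good), thresholds, side="right") / len(scores_good)
    cdf_bad = np.searchsorted(np.sort(scores_bad), thresholds, side="right") / len(scores_bad)
    return 100.0 * np.max(np.abs(cdf_good - cdf_bad))

rng = np.random.default_rng(0)
good = rng.normal(600.0, 50.0, 5000)  # hypothetical scores: accounts that stayed current
bad = rng.normal(560.0, 50.0, 5000)   # hypothetical scores: accounts that went delinquent
print(f"KS = {ks_statistic(good, bad):.1f}")  # a 0.8-sigma separation lands near KS = 31
```

A KS around 30 for two normal score distributions separated by 0.8 standard deviations matches the order of magnitude reported in the abstract.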

Keywords: delinquency, mortgage, model development, model validation

Procedia PDF Downloads 226
2568 Modeling of the Heat and Mass Transfer in Fluids through Thermal Pollution in Pipelines

Authors: V. Radulescu, S. Dumitru

Abstract:

Introduction: Determination of the temperature field inside a fluid in motion has many practical applications, especially in the case of turbulent flow. The phenomenon is greater when the solid walls have a different temperature than the fluid. Turbulent heat and mass transfer play an essential role in thermal pollution, as was recorded during the damage at the Thermoelectric Power Plant Oradea (closed even today). Basic Methods: Solving the theoretical turbulent thermal pollution problem is particularly difficult. By using semi-empirical theories and simplifying assumptions based on experimental measurements, a mathematical model can be elaborated for further numerical simulations. The three zones of flow are analyzed separately: the vicinity of the solid wall, the turbulent transition zone, and the turbulent core. For each zone, the temperature distribution law is determined. The dependence between the Stanton and Prandtl numbers is established with correction factors, based on experimental measurements. Major Findings/Results: The limit of the laminar thermal sublayer was determined based on the theory of Landau and Levich, using the assumption that the longitudinal component of the velocity pulsation and the pulsation's frequency vary proportionally with the distance to the wall. The average temperature is calculated with a formula similar to the solution for the velocity, by an analogous averaging. On these assumptions, numerical modeling was performed with a temperature gradient for turbulent flow in pipes (intact or damaged, with cracks) having four different diameters, between 200 and 500 mm, as in the Thermoelectric Power Plant Oradea.
Conclusions: A superposition was made between the molecular viscosity and the turbulent one, followed by the addition of the molecular and turbulent transfer coefficients, as needed to elaborate the theoretical and numerical modeling. The laminar boundary layer has a different thickness when flow with heat transfer is compared with flow without a temperature gradient. The obtained results agree within a margin of error of 5% between the classical semi-empirical theories and the developed model, based on the experimental data. Finally, a general correlation between the Stanton number and the Prandtl number is obtained for a specific flow (with an associated Reynolds number).
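As a rough illustration of the kind of Stanton-Prandtl relationship discussed, the sketch below evaluates the standard Dittus-Boelter baseline (not the authors' fitted correlation) for the 200-500 mm pipe diameters mentioned; the velocity and fluid properties are assumed:

```python
def stanton_number(re, pr):
    # St = Nu / (Re * Pr), with Nu from the Dittus-Boelter correlation
    # Nu = 0.023 * Re^0.8 * Pr^0.4 (heating of the fluid)
    nu = 0.023 * re ** 0.8 * pr ** 0.4
    return nu / (re * pr)

# assumed conditions: water (Pr ~ 7), bulk velocity 1.5 m/s,
# kinematic viscosity 1e-6 m^2/s, and the 200-500 mm diameters from the abstract
for d in (0.2, 0.3, 0.4, 0.5):
    re = 1.5 * d / 1.0e-6
    print(f"D = {d:.1f} m  Re = {re:.2e}  St = {stanton_number(re, 7.0):.2e}")
```

Since St = 0.023 Re^(-0.2) Pr^(-0.6) under this correlation, the Stanton number falls slowly as the Reynolds number (and hence pipe diameter) grows.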

Keywords: experimental measurements, numerical correlations, thermal pollution through pipelines, turbulent thermal flow

Procedia PDF Downloads 164
2567 The Uses of Photodynamic Therapy versus Anti-vascular Endothelial Growth Factor in the Management of Acute Central Serous Chorioretinopathy: Systematic Review and Meta-Analysis

Authors: Hadeel Seraj, Mohammed Khoshhal, Mustafa Alhamoud, Hassan Alhashim, Anas Alsaif, Amro Abukhashabah

Abstract:

Central serous chorioretinopathy (CSCR) is an idiopathic retinal disease characterized by localized serous detachment of the neurosensory retina at the macula. To date, there is no high-quality evidence on recent updates in treating acute CSCR, focusing on photodynamic therapy (PDT) and anti-vascular endothelial growth factor (anti-VEGF) agents. Hence, this work aims to systematically review the latest treatment strategies for acute CSCR. Methodology: The following electronic databases were used for a comprehensive and systematic literature review: MEDLINE, EMBASE, and Cochrane. We analyzed studies comparing PDT with placebo, anti-VEGF with placebo, or PDT with anti-VEGF in treating acute CSCR eyes with no previous intervention. Results: Seven studies were included, with a total of 292 eyes. The overall positive results were significantly higher among patients who received PDT compared to control groups (OR = 7.96, 95% CI 3.02 to 20.95, p < 0.001). The proportions of positive results were 81.0% and 97.1% among patients who received anti-VEGF and PDT, respectively, with no statistically significant difference between the groups. In addition, there were no significant differences between the anti-VEGF and control groups. In contrast, PDT was significantly associated with lower recurrence odds than the control groups (OR = 0.12, 95% CI 0.04 to 0.39, p = 0.042). Conclusion: According to our findings, PDT showed higher positive results than anti-VEGF in acute CSCR. In addition, PDT was significantly associated with a lower recurrence rate than the control group. However, the analysis needs to be confirmed and updated by large-scale, well-designed RCTs.
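The odds ratios and confidence intervals reported above follow from standard 2x2-table arithmetic. A minimal sketch of the Wald construction, with illustrative counts (not the review's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # a/b: events/non-events in the treated arm; c/d: in the control arm
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# illustrative counts only
or_, lo, hi = odds_ratio_ci(40, 5, 20, 25)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

A pooled meta-analytic OR would additionally weight per-study log-ORs (e.g. inverse-variance weighting), which this single-table sketch omits.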

Keywords: central serous chorioretinopathy, Acute CSCR, photodynamic therapy, anti-vascular endothelial growth factor

Procedia PDF Downloads 76
2566 Theoretical Prediction on the Lifetime of Sessile Evaporating Droplet in Blade Cooling

Authors: Yang Shen, Yongpan Cheng, Jinliang Xu

Abstract:

Effective blade cooling is of great significance for improving the performance of turbines, and mist cooling is emerging as a promising alternative to traditional single-phase cooling. In mist cooling, the injected droplets evaporate rapidly and cool down the blade surface through the absorbed latent heat; hence the lifetime of an evaporating droplet becomes critical for the design of cooling passages in the blade. So far there have been extensive studies on droplet evaporation, but most apply the isothermal model. In reality, the surface cooling effect can affect droplet evaporation greatly and prolong the droplet evaporation lifetime significantly. In our study, a new theoretical model for sessile droplet evaporation with the surface cooling effect is built in toroidal coordinates. Three evaporation modes are analyzed over the evaporation lifetime: the 'Constant Contact Radius' (CCR) mode, the 'Constant Contact Angle' (CCA) mode, and the 'stick-slip' (SS) mode. The dimensionless number E0 is introduced to indicate the strength of the evaporative cooling; it is defined based on the thermal properties of the liquid and the atmosphere. Our model accurately predicts the evaporation lifetime, as validated against available experimental data. The temporal variations of droplet volume, contact angle, and contact radius are then presented under the CCR, CCA, and SS modes, and the following conclusions are obtained.
1) The larger the dimensionless number E0, the longer the lifetime in all three evaporation cases. 2) The droplet volume over time still follows the '2/3 power law' in the CCA mode, as in the isothermal model without the cooling effect. 3) In the SS mode, a large transition contact angle reduces the evaporation time in the CCR stage and increases it in the CCA stage; the overall lifetime is increased. 4) A correction factor for the instantaneous droplet volume is derived to predict the droplet lifetime accurately. These findings may be of great significance for exploring the dynamics and heat transfer of sessile droplet evaporation.
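The '2/3 power law' mentioned in conclusion 2) can be written explicitly: in the isothermal CCA mode, V^(2/3) decays linearly in time, i.e. V(t) = V0(1 - t/t_life)^(3/2). A minimal sketch (the initial volume and lifetime below are assumed, not from the paper):

```python
def droplet_volume_cca(t, v0, t_life):
    # isothermal CCA mode: V^(2/3) decreases linearly in time,
    # so V(t) = v0 * (1 - t/t_life)^(3/2)
    return v0 * (1.0 - t / t_life) ** 1.5

v0, t_life = 2.0e-9, 120.0   # assumed initial volume (m^3) and lifetime (s)
half = droplet_volume_cca(60.0, v0, t_life)   # volume at half the lifetime
```

The surface-cooling correction in the paper modifies this isothermal baseline; the sketch shows only the classical law the abstract compares against.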

Keywords: blade cooling, droplet evaporation, lifetime, theoretical analysis

Procedia PDF Downloads 141
2565 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

A DNA barcode is a short mitochondrial DNA fragment made up of nucleotides, each consisting of three subunits: a phosphate group, a sugar, and a nucleic base (A, T, C, or G). Barcodes provide good sources of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes, a task that has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics like progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. This method avoids the complex problem of form and structure in different classes of organisms; it is evaluated on empirical data, and its classification performance is compared with other methods. Our system consists of three phases. The first, transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, the Fourier transform, and power spectrum signal processing. The second, approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third, classification of the DNA barcodes, is realized by applying a hierarchical classification algorithm.
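The first phase (EIIP codification followed by a Fourier power spectrum) can be sketched directly; the EIIP value per base is the published one, while the toy sequence is ours:

```python
import numpy as np

# published EIIP (electron-ion interaction pseudopotential) value per base
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def barcode_power_spectrum(seq):
    # step 1: EIIP codification of the sequence into a numerical signal
    x = np.array([EIIP[base] for base in seq])
    x = x - x.mean()                      # remove the DC component
    # step 2: FFT power spectrum
    return np.abs(np.fft.fft(x)) ** 2

# a period-3 toy sequence concentrates its power at k = N/3
ps = barcode_power_spectrum("ATG" * 20)
```

The period-3 peak at k = N/3 is the classic coding-region signature that makes this spectral representation informative before the wavelet-network approximation step.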

Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)

Procedia PDF Downloads 316
2564 The Vulnerability of Farmers in Valencia Negros Oriental to Climate Change: El Niño Phenomenon and Malnutrition

Authors: J. K. Pis-An

Abstract:

Objective: The purpose of the study was to examine the vulnerability of farmers to the effects of climate change, specifically the El Niño phenomenon felt in the Philippines in 2009-2010. Methods: A KAP survey determined behavioral responses to the effects of El Niño; body mass index was measured; and dietary assessment used 24-hour food recall. Results: 75% of the respondents claimed that crop yields significantly decreased during the drought. Farmers' households are large: 51.6% are composed of 6-10 family members, and 68% have annual incomes below Php 100,000. Anthropometric assessment showed a prevalence of chronic energy deficiency (CED) grade 1 of 17% among females, with 28.57% low normal. Among males, the body mass index results were 10% CED grade 1, 18.33% low normal, and 31.67% obese grade 1. Dietary assessment showed that macronutrient intakes of carbohydrates, protein, and fat were below recommended amounts for 31.6% of respondents, and micronutrient deficiencies in calcium, iron, vitamin A, thiamine, riboflavin, niacin, and vitamin C were found. Conclusion: The majority of the rural population is engaged in farming, the livelihood that makes up the backbone of the local economy. Placing the current nutritional status of the farmers in the context of food security, there are reasons to believe that this status will worsen if extreme climatic conditions once again prevail in the region. Because farmers rely primarily on home-grown crops for their food supply, a reduction in farm production during drought is expected to adversely affect dietary intake. The local government should therefore institute programs to increase food resiliency and prioritize the health of the population as the moving force for productivity and development.
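The anthropometric categories above follow the standard FAO/WHO adult BMI cut-offs, which can be sketched as follows (the cut-off values are the standard ones, not taken from the study; the example respondent is hypothetical):

```python
def bmi_category(weight_kg, height_m):
    # FAO/WHO adult cut-offs; CED = chronic energy deficiency
    bmi = weight_kg / height_m ** 2
    if bmi < 16.0:
        return bmi, "CED grade 3"
    if bmi < 17.0:
        return bmi, "CED grade 2"
    if bmi < 18.5:
        return bmi, "CED grade 1"
    if bmi < 25.0:
        return bmi, "normal"
    if bmi < 30.0:
        return bmi, "overweight"
    return bmi, "obese"

# hypothetical respondent: 50 kg at 1.70 m
bmi, label = bmi_category(50.0, 1.70)
```

A 50 kg respondent at 1.70 m falls at BMI 17.3, i.e. in the CED grade 1 band (17.0-18.49) that the abstract reports for 17% of females.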

Keywords: world health organization, united nation framework convention on climate change, anthropometric, macronutrient, micronutrient

Procedia PDF Downloads 443
2563 Computational Fluid Dynamics Modeling of Physical Mass Transfer of CO₂ by N₂O Analogy Using One Fluid Formulation in OpenFOAM

Authors: Phanindra Prasad Thummala, Umran Tezcan Un, Ahmet Ozan Celik

Abstract:

Removal of CO₂ by MEA (monoethanolamine) in structured packing columns depends highly on the gas-liquid interfacial area and film thickness (liquid load). CFD (computational fluid dynamics) can be used to find the interfacial area and film thickness, and their impact on mass transfer in gas-liquid flow, in any column geometry. In general, modeling approaches used in CFD derive mass transfer parameters from standard correlations based on penetration or surface renewal theories. To avoid the effect of the assumptions involved in deriving those correlations and to model the mass transfer based solely on fluid properties, state-of-the-art approaches like the one-fluid formulation are useful. In this work, the one-fluid formulation was implemented and evaluated for modeling the physical mass transfer of CO₂ by N₂O analogy in the OpenFOAM CFD software. The N₂O analogy avoids the effect of chemical reactions on absorption and allows studying the amount of CO₂ physical mass transfer possible in a given geometry. The computational domain in the current study was a flat plate with gas and liquid flowing in countercurrent directions. The effects of operating parameters such as flow rate, MEA concentration, and angle of inclination on the physical mass transfer are studied in detail. Liquid-side mass transfer coefficients obtained by the simulations are compared to correlations available in the literature, and it was found that the one-fluid formulation effectively captured the effects of interface surface instabilities on the mass transfer coefficient with higher accuracy. The high mesh refinement required near the interface region was found to be a limiting factor for utilizing this approach in large-scale simulations. Overall, the one-fluid formulation is found to be promising for CFD studies involving CO₂ mass transfer.
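For context, the penetration-theory baseline mentioned above gives the liquid-side coefficient as k_L = 2·sqrt(D/(π·t_c)). A sketch with assumed CO₂-in-water properties (the diffusivity and contact time are typical literature values, not the paper's):

```python
import math

def k_l_penetration(diffusivity, contact_time):
    # Higbie penetration theory: k_L = 2 * sqrt(D / (pi * t_c))
    return 2.0 * math.sqrt(diffusivity / (math.pi * contact_time))

# assumed values: CO2 in water, D ~ 1.9e-9 m^2/s; 0.1 s surface contact time
k_l = k_l_penetration(1.9e-9, 0.1)
print(f"k_L = {k_l:.2e} m/s")
```

This is the kind of correlation-derived coefficient the one-fluid simulations are benchmarked against; note the sqrt dependence, so quadrupling the contact time halves k_L.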

Keywords: one fluid formulation, CO₂ absorption, liquid mass transfer coefficient, OpenFOAM, N₂O analogy

Procedia PDF Downloads 219
2562 Anti-Parasite Targeting with Amino Acid-Capped Nanoparticles Modulates Multiple Cellular Processes in Host

Authors: Oluyomi Stephen Adeyemi, Kentaro Kato

Abstract:

Toxoplasma gondii is the etiological agent of toxoplasmosis, a common parasitic disease capable of infecting a range of hosts, including nearly one-third of the human population. Current treatment options for toxoplasmosis patients are limited. In consequence, toxoplasmosis represents a large global burden that is further increased by the shortcomings of the current therapeutic options. These factors underscore the need for better anti-T. gondii agents and/or new treatment approaches. In the present study, we sought to find out whether preparing and capping nanoparticles (NPs) in amino acids would enhance specificity toward the parasite versus the host cell. The selection of amino acids was premised on the fact that T. gondii is auxotrophic for some amino acids. The amino acid-capped nanoparticles (amino-NPs) were synthesized, purified, and characterized following established protocols. Next, we tested the anti-T. gondii activity of the amino-NPs using an in vitro experimental model of infection. Overall, our data show evidence of enhanced and selective action by the amino-NPs against the parasite versus the host cells. The findings are promising and provide additional support for exploring the prospects of NPs as alternative anti-parasite agents. In addition, the anti-parasite action of the amino-NPs indicates that the nutritional requirements of the parasite may represent a viable target in the development of better alternative anti-parasite agents. Furthermore, the data suggest that the anti-parasite mechanism of the amino-NPs involves multiple cellular processes, including the production of reactive oxygen species (ROS), modulation of hypoxia-inducible factor-1 alpha (HIF-1α), and activation of the kynurenine pathway. Taken together, the findings further highlight the prospects of NPs as an alternative source of anti-parasite agents.

Keywords: drug discovery, infectious diseases, mode of action, nanomedicine

Procedia PDF Downloads 111
2561 Fraud in the Higher Educational Institutions in Assam, India: Issues and Challenges

Authors: Kalidas Sarma

Abstract:

Fraud is a social problem that changes with social change, and it has both regional and global impact. The introduction of a private domain in higher education alongside public institutions has led to the commercialization of higher education, which encourages an unprecedented mushrooming of private institutions and results in fraudulent activities in higher educational institutions in Assam, India. Presently, fraud has been noticed in in-service promotion and in fake entry qualifications at different levels of the workplace, with teachers using fake master's, master of philosophy, and doctor of philosophy degree certificates. The objective of the study is to identify grey areas in the maintenance of quality in higher educational institutions in Assam and to draw the contours for planning and implementation. This study is based on both primary and secondary data, collected through questionnaires and through requests under the Right to Information Act 2005. In Assam, there are 301 undergraduate and postgraduate colleges distributed across 27 administrative districts, with about 11,000 college teachers. A total of 421 college teachers from the 14 respondent colleges have been taken for analysis. The collected data have been analyzed using the PHP scripting language with a MySQL database and the Google Maps Application Programming Interface (API). Graphs have been generated using the open-source tool Chart.js, and spatial distribution maps have been generated with the help of the geo-references of the colleges. The results show that: (i) violations of the University Grants Commission's (UGC's) regulations for the award of M.Phil./Ph.D. degrees are clearly evident; (ii) there is a gap between the apex regulatory bodies of higher education at the national and state levels in checking fraud;
(iii) mala fide 'No Objection Certificates' (NOCs) issued by the Government of Assam have played a pivotal role in the occurrence of fraudulent practices in the higher educational institutions of Assam; and (iv) violation of the verdict of the Hon'ble Supreme Court of India regarding the territorial jurisdiction of universities for the award of Ph.D. and M.Phil. degrees in distance mode/study centres is also a responsible factor in the spread of these academic frauds in Assam and other states. The challenges and the mitigation of these issues are discussed.

Keywords: Assam, fraud, higher education, mitigation

Procedia PDF Downloads 166
2560 Evaluation of Mito-Uncoupler Induced Hyper Metabolic and Aggressive Phenotype in Glioma Cells

Authors: Yogesh Rai, Saurabh Singh, Sanjay Pandey, Dhananjay K. Sah, B. G. Roy, B. S. Dwarakanath, Anant N. Bhatt

Abstract:

One of the most common signatures of highly malignant gliomas is their capacity to metabolize more glucose to lactic acid than normal brain tissues, even under normoxic conditions (the Warburg effect), indicating that aerobic glycolysis is constitutively upregulated through stable genetic or epigenetic changes. However, oxidative phosphorylation (OxPhos) is also required to maintain the mitochondrial membrane potential for tumor cell survival. During tumorigenesis, tumor cells at their fastest growth rate exhibit both high glycolysis and high OxPhos. Therefore, metabolically reprogrammed cancer cells combining aerobic glycolysis with altered OxPhos develop a robust metabolic phenotype, which confers a selective growth advantage. In our study, we grew high-glycolytic BMG-1 (glioma) cells with continuous exposure to the mitochondrial uncoupler 2,4-dinitrophenol (DNP) for 10 passages to obtain a phenotype of high glycolysis with enhanced altered OxPhos. We found that the OxPhos-modified BMG (OPMBMG) cells have a growth rate and cell cycle distribution similar to the parental cells, but higher mitochondrial mass and functional enzymatic activity. In in vitro studies, OPMBMG cells showed enhanced invasion, proliferation, and migration properties, and they also showed enhanced angiogenesis in a matrigel plug assay. Xenografted tumors from OPMBMG cells showed a reduced latent period, a faster growth rate, and a nearly five-fold reduction in tumor take in nude mice compared to BMG-1 cells, suggesting that the robust metabolic phenotype facilitates tumor formation and growth. OPMBMG cells, which were found to be radio-resistant, showed enhanced radio-sensitization by 2-DG compared to the parental BMG-1 cells. This study suggests that metabolic reprogramming in cancer cells enhances the potential for migration, invasion, and proliferation. It also strengthens the ability of cancer cells to escape death processes, conferring resistance to therapeutic modalities.
Our data also suggest that combining metabolic inhibitors like 2-DG with conventional therapeutic modalities can sensitize such metabolically aggressive cancer cells more than the therapies alone.

Keywords: 2-DG, BMG, DNP, OPM-BMG

Procedia PDF Downloads 224
2559 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites

Authors: S. D. El Wakil, M. Pladsen

Abstract:

Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-for-use components or products made of FRP composites are seldom obtained directly. Secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the plies, which occurs at both the entrance and exit planes of the workpiece. Evidently, the functionality of the workpiece would be detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling of FRP composites. Each test specimen was a woven graphite-fiber-reinforced/epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal one, expressed as a percentage of the nominal diameter. This damage value was determined for each combination of feed rate and cutting speed, and a matrix comprising those values was established, where the columns indicate varying feed rates and the rows varying cutting speeds. Next, the analysis of variance (ANOVA) approach was employed using Minitab software in order to obtain the combination that would minimize the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
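The ANOVA step on the damage matrix can be sketched outside Minitab as a two-way ANOVA without replication; the damage values below are made up for illustration and are not the experimental data:

```python
import numpy as np
from scipy import stats

def two_way_anova(damage):
    # two-way ANOVA without replication:
    # rows = cutting speeds, columns = feed rates
    r, c = damage.shape
    grand = damage.mean()
    ss_rows = c * ((damage.mean(axis=1) - grand) ** 2).sum()
    ss_cols = r * ((damage.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((damage - grand) ** 2).sum() - ss_rows - ss_cols
    df_r, df_c, df_e = r - 1, c - 1, (r - 1) * (c - 1)
    f_r = (ss_rows / df_r) / (ss_err / df_e)
    f_c = (ss_cols / df_c) / (ss_err / df_e)
    return (f_r, stats.f.sf(f_r, df_r, df_e)), (f_c, stats.f.sf(f_c, df_c, df_e))

# made-up damage matrix (% diameter error), growing strongly with feed rate
damage = np.array([[1.0, 1.8, 2.6],
                   [1.1, 1.9, 2.8],
                   [1.3, 2.1, 3.0]])
(f_speed, p_speed), (f_feed, p_feed) = two_way_anova(damage)
```

With this synthetic matrix, the feed-rate factor dominates (larger F, smaller p), the same kind of conclusion the study reaches about low feed rates.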

Keywords: drilling of composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites

Procedia PDF Downloads 388
2558 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurate

Authors: Susan Diamond

Abstract:

Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training large neural network models is very resource intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring the hardware to configuring it with the right firmware and software. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which is therefore slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable, and fault-tolerant manner. It supports a wide range of deep learning frameworks, such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skill set required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.

Keywords: deep learning, machine learning, cognitive computing, model training

Procedia PDF Downloads 207
2557 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering

Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause

Abstract:

In recent years, object detection has gained much attention and become a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. Unfortunately, to the best of our knowledge, object detection under illumination with shadow consideration has not been well solved yet, and this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination. Under varying image-capturing conditions, the algorithms need to consider a variety of possible environmental factors, as colour information, lighting, and shadows vary from image to image. Existing methods mostly fail to produce appropriate results due to variations in colour information, lighting effects, threshold specifications, histogram dependencies, and colour ranges. To overcome these limitations, we propose an object detection algorithm with pre-processing methods that reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set, acquired under various environmental conditions, for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.
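Two ingredients named above, the YCrCb conversion and the erosion/dilation cleanup, can be sketched with numpy alone; the BT.601 coefficients are the standard ones, while the 3x3 structuring element and the toy mask are our assumptions:

```python
import numpy as np

def rgb_to_ycrcb(img):
    # ITU-R BT.601 conversion: the colour model used by the method
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return np.stack([y, cr, cb], axis=-1)

def erode(mask):
    # 3x3 binary erosion: a pixel survives only if its whole neighbourhood is set
    p = np.pad(mask, 1)
    out = np.ones_like(mask, dtype=bool)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def dilate(mask):
    # 3x3 binary dilation: a pixel is set if any neighbour is set
    p = np.pad(mask, 1)
    out = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

# morphological opening (erode then dilate) removes isolated speckle pixels
mask = np.zeros((10, 10), dtype=bool)
mask[2:7, 2:7] = True   # segmented object region
mask[9, 0] = True       # speckle pixel of the kind shadows leave behind
opened = dilate(erode(mask))
```

The opening removes the isolated pixel while restoring the 5x5 object block intact, which is why erosion-then-dilation is the usual post-segmentation cleanup.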

Keywords: image processing, illumination equalization, shadow filtering, object detection

Procedia PDF Downloads 214
2556 In silico Statistical Prediction Models for Identifying the Microbial Diversity and Interactions Due to Fixed Periodontal Appliances

Authors: Suganya Chandrababu, Dhundy Bastola

Abstract:

Like in the gut, the subgingival microbiota plays a crucial role in oral hygiene, health, and cariogenic diseases. Human activities like diet, antibiotics, and periodontal treatments alter the bacterial communities, metabolism, and functions in the oral cavity, leading to a dysbiotic state and changes in the plaques of orthodontic patients. Fixed orthodontic appliances hinder oral hygiene and cause changes in the dental plaques influencing the subgingival microbiota. However, the diversity and complexity of the microbial species pose a great challenge in understanding the community distribution patterns of the taxa and their role in oral health. In this research, we analyze subgingival microbial samples from individuals with fixed dental appliances (metal/clear) using an in silico approach. We employ exploratory hypothesis-driven multivariate and regression analysis to shed light on the microbial community and its functional fluctuations due to the dental appliances used, and to identify risks associated with complex disease phenotypes. Our findings confirm the changes in oral microbiota composition due to the presence and type of fixed orthodontic devices. We identified seven main periodontal pathogens, including Bacteroidetes, Actinobacteria, Proteobacteria, Fusobacteria, and Firmicutes, whose abundances were significantly altered by the presence and type of fixed appliances used. In the case of metal braces, the abundances of Bacteroidetes, Proteobacteria, Fusobacteria, Candidatus saccharibacteria, and Spirochaetes significantly increased, while the abundances of Firmicutes and Actinobacteria decreased. In individuals with clear braces, however, the abundances of Bacteroidetes and Candidatus saccharibacteria increased. The highest abundance (p-value = 0.004) was observed for Bacteroidetes in individuals with the metal appliance, a phylum associated with gingivitis, periodontitis, endodontic infections, and odontogenic abscesses. 
Overall, bacterial abundances decrease with the clear type and increase with the metal type of braces. Regression analysis further validated the multivariate analysis of variance (MANOVA) results, supporting the hypothesis that the presence and type of fixed oral appliances significantly alter bacterial abundance and composition.
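As a toy illustration of the kind of group-wise comparison underlying the MANOVA and regression analysis described above, the sketch below contrasts mean relative abundances of one phylum between appliance groups. All numbers here are synthetic placeholders, not the study's data:

```python
from statistics import mean

# Synthetic relative abundances (%) of one phylum across subjects,
# grouped by fixed-appliance type; values are illustrative only.
abundance = {
    "metal": [12.4, 15.1, 14.0, 16.3],
    "clear": [8.2, 9.5, 7.9, 8.8],
}

group_means = {group: mean(values) for group, values in abundance.items()}
effect = group_means["metal"] - group_means["clear"]
print(group_means, f"difference = {effect:.2f} percentage points")
```

A formal MANOVA would test such differences jointly across all taxa rather than one phylum at a time.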

Keywords: oral microbiota, statistical analysis, fixed orthodontic appliances, bacterial abundance, multivariate analysis, regression analysis

Procedia PDF Downloads 192
2555 Surface Deformation Studies in South of Johor Using the Integration of InSAR and Resistivity Methods

Authors: Sirajo Abubakar, Ismail Ahmad Abir, Muhammad Sabiu Bala, Muhammad Mustapha Adejo, Aravind Shanmugaveloo

Abstract:

Over the years, land subsidence has been a serious threat, mostly to urban areas. Land subsidence is the sudden sinking or gradual downward settling of the ground’s surface with little or no horizontal motion. In most areas, land subsidence is a slow process that covers a large area; therefore, it sometimes goes unnoticed. The south of Johor is the area of interest for this project because it is undergoing rapid urbanization. The objective of this research is to evaluate and identify potential deformations in the south of Johor using integrated remote sensing and 2D resistivity methods. Synthetic aperture radar interferometry (InSAR), a remote sensing technique, has the potential to map coherent displacements at centimeter to millimeter resolutions. The persistent scatterer interferometry (PSI) stacking technique was applied to Sentinel-1 data to detect earth deformation in the study area. Dipole-dipole resistivity profiling was conducted in three areas to determine the subsurface features there. The interpreted subsurface features were then correlated with the remote sensing results to predict the possible causes of subsidence and uplift in the south of Johor. Based on the results obtained, West Johor Bahru (0.63 mm/year) and Ulu Tiram (1.61 mm/year) are undergoing uplift, attributed to possible geological uplift. On the other hand, East Johor Bahru (-0.26 mm/year) and Senai (-1.16 mm/year) undergo subsidence due to possible fracturing and loading by granitic boulders. Land subsidence must be taken seriously, as it can cause serious damage to infrastructure and human life. Monitoring land subsidence and taking preventive action must be done to prevent disasters.
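The PSI velocities quoted above can be projected into cumulative vertical displacement. A minimal sketch, assuming a constant linear rate (the five-year horizon is an illustrative assumption, not part of the study):

```python
# Mean PSI velocities reported for the study sites (mm/year);
# positive values indicate uplift, negative values subsidence.
velocities_mm_yr = {
    "West Johor Bahru": 0.63,
    "Ulu Tiram": 1.61,
    "East Johor Bahru": -0.26,
    "Senai": -1.16,
}

def cumulative_displacement(rate_mm_yr, years):
    """Project cumulative displacement assuming a constant linear rate."""
    return rate_mm_yr * years

for site, v in velocities_mm_yr.items():
    trend = "uplift" if v > 0 else "subsidence"
    print(f"{site}: {trend}, {cumulative_displacement(v, 5):+.2f} mm over 5 years")
```

At these rates, Senai would settle nearly 6 mm in five years, which is why long-term monitoring matters even for slow deformation.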

Keywords: interferometric synthetic aperture radar, persistent scatterer, minimum spanning tree, resistivity, subsidence

Procedia PDF Downloads 145
2554 Recovery of Selenium from Scrubber Sludge in Copper Process

Authors: Lakshmikanth Reddy, Bhavin Desai, Chandrakala Kari, Sanjay Sarkar, Pradeep Binu

Abstract:

The sulphur dioxide gases generated as a by-product of the smelting and converting operations of copper concentrate contain selenium, apart from zinc, lead, copper, cadmium, bismuth, antimony, and arsenic. The gaseous stream is treated in a waste heat boiler, an electrostatic precipitator, and scrubbers to remove coarse particulate matter in order to produce commercial-grade sulfuric acid. The gas cleaning section of the acid plant uses water to scrub the smelting gases. After scrubbing, the sludge settled at the bottom of the scrubber was analyzed in the present investigation. It was found to contain 30 to 40 wt% copper and up to 40 wt% selenium. The sludge collected during blow-down is directly recycled to the smelter for copper recovery. However, the selenium is expected to vaporize again due to the high oxidation potential during smelting and converting, causing accumulation of selenium in the sludge. In the present investigation, a roasting process has been developed to recover the selenium from the sludge before copper recovery at the smelter. Selenium is associated with copper in the sludge as copper selenide, as determined by X-ray diffraction and electron microscopy. Thermodynamic and thermogravimetric studies revealed that the copper selenide phase present in the sludge was amenable to oxidation at 600°C, forming oxides of copper and selenium (Cu-Se-O). However, the dissociation of selenium from the copper oxide was made possible by sulfatation using sulfur dioxide between 450 and 600°C, resulting in the formation of CuSO₄ (s) and SeO₂ (g). Lab-scale trials were carried out in a vertical tubular furnace to determine the optimum roasting conditions with respect to roasting time, temperature, and the molar ratio of O₂:SO₂. Using these optimum conditions, up to 90 wt% of the selenium, in the form of SeO₂ vapors, could be recovered from the sludge in a large-scale commercial roaster. 
Roasted sludge free of selenium and containing oxides and sulfates of copper can now be recycled in the smelter for copper recovery.

Keywords: copper, selenium, copper selenide, sludge, roasting, SeO₂

Procedia PDF Downloads 204
2553 Community Perceptions on Honey Quality in Tobacco Growing Areas in Kigoma Region, Tanzania

Authors: Pilly Kagosi, Cherestino Balama

Abstract:

Beekeeping plays a major role in improving biodiversity, increasing household income, and boosting crop production through pollination. Tobacco farming is also a main source of household income for smallholder farmers. In Kigoma, tobacco production has increased and is perceived to threaten honey quality. The study explored the community's perception of honey quality in tobacco and non-tobacco growing areas. The study was conducted in Kigoma Region, Tanzania. Districts and villages were purposively sampled based on the large numbers of people engaged in beekeeping activities and tobacco farming. Socioeconomic data were collected and analysed using the Statistical Package for Social Sciences and content analysis. Stakeholders' perception of honey quality was analysed using a Likert scale. The majority of the respondents agreed that tobacco farming greatly affects honey quality because honey from beehives near tobacco farms tastes bitter and is sometimes irritating, which was associated with the nicotine content and agrochemicals applied to tobacco crops, although respondents could not differentiate bitterness caused by agrochemicals from that caused by bee fodder. Furthermore, it was revealed that chemicals applied to tobacco and vegetables have a negative effect on bees and honey quality. Respondents believe that setting beehives near tobacco farms might contaminate honey and therefore affect its quality. Beekeepers are not aware that nicotine content can also come from other bee fodders, such as miombo species, which do not have any effect on human beings. In fact, tobacco farming does not compromise honey quality when farmers follow proper management of tobacco flowers and proper handling of honey. The big challenges in tobacco farming, though, are the chemicals applied to the crops and the harvesting of bee fodder trees for curing tobacco. The study recommends training the community on proper management of tobacco and proper handling of bee products.

Keywords: community, honey, perceptions, tobacco

Procedia PDF Downloads 143
2552 Analysis of Brownfield Soil Contamination Using Local Government Planning Data

Authors: Emma E. Hellawell, Susan J. Hughes

Abstract:

Brownfield sites are currently being redeveloped for residential use. Information on soil contamination on these former industrial sites is collected as part of the planning process by local government. This research project analyses this untapped resource of environmental data, using site investigation data submitted to a local Borough Council in Surrey, UK. Over 150 site investigation reports were collected and interrogated to extract relevant information. This study involved three phases. Phase 1 was the development of a database for soil contamination information from local government reports. This database contained information on the source, history, and quality of the data, together with the chemical information on the soil that was sampled. Phase 2 involved obtaining site investigation reports for developments within the study area and extracting the required information for the database. Phase 3 was the data analysis and interpretation of key contaminants, to evaluate typical levels of contaminants and their distribution within the study area, and to relate these results to current guideline levels of risk for future site users. Preliminary results for a pilot study using a sample of the dataset have been obtained. This pilot study showed there is some inconsistency in the quality of the reports and measured data, so careful interpretation of the data is required. Analysis of the information found high levels of lead in shallow soil samples, with mean and median levels exceeding the current guidance for residential use. The data also showed elevated (but below guidance) levels of potentially carcinogenic polyaromatic hydrocarbons. Of particular concern was the high detection rate for asbestos fibers, which were found at low concentrations in 25% of the soil samples tested (although the sample set was small). Contamination levels of the remaining chemicals tested were all below the guidance level for residential site use. 
These preliminary pilot study results will be expanded, and results for the whole local government area will be presented at the conference. The pilot study has demonstrated the potential for this extensive dataset to provide greater information on local contamination levels. This can help inform regulators and developers, leading to more targeted site investigations, improved risk assessments, and better brownfield development.
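The mean/median-versus-guideline comparison described in Phase 3 can be sketched as follows. Both the lead concentrations and the guideline value below are hypothetical illustrations, not the study's measurements or the actual UK screening level:

```python
from statistics import mean, median

# Hypothetical shallow-soil lead concentrations (mg/kg) extracted from
# site investigation reports; values are invented for illustration.
lead_mg_kg = [310, 95, 540, 220, 180, 760, 130, 410]
GUIDELINE_MG_KG = 200  # assumed residential screening value, illustrative only

exceedances = [c for c in lead_mg_kg if c > GUIDELINE_MG_KG]
print(f"mean={mean(lead_mg_kg):.1f}, median={median(lead_mg_kg):.1f} mg/kg")
print(f"{len(exceedances)}/{len(lead_mg_kg)} samples exceed the guideline")
```

Reporting both mean and median, as the study does, guards against a few extreme hotspots dominating the summary statistic.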

Keywords: Brownfield development, contaminated land, local government planning data, site investigation

Procedia PDF Downloads 136
2551 Change Detection of Water Bodies in Dhaka City: An Analysis Using Geographic Information System and Remote Sensing

Authors: M. Humayun Kabir, Mahamuda Afroze, K. Maudood Elahi

Abstract:

Since the late 1900s, unplanned and rapid urbanization processes have drastically altered the land, reduced water bodies, and decreased vegetation cover in Dhaka, the capital city of Bangladesh. The capitalist modes of urbanization result in the encroachment of surface water bodies in this city. The main goal of this study is to investigate change detection of water bodies in Dhaka city, analyzing the spatial distribution of water bodies and calculating their rate of change. This effort aims to influence public policy for environmental justice initiatives around protecting water bodies to ensure the proper functioning of the urban ecosystem. This study accomplishes its research goal by compiling satellite imageries in GIS software to understand the changes of water bodies in Dhaka city. The work focuses on the late 20th century to the early 21st century to analyze the city before and after major infrastructural changes occurred in an unplanned manner. The land use of the study area has been classified into four categories, and the areas of the different land uses have been calculated using MS Excel and SPSS. The results reveal that urbanization expanded from the central to the northern part of the city, and major encroachment occurred in the western and eastern parts. It has also been found that, in 1988, the total area of water bodies was 8935.38 hectares; it gradually decreased, reaching 6065.73, 4853.32, and 2077.56 hectares in 1998, 2008, and 2017, respectively. Rapid population growth, unplanned urbanization, and industrialization have generated pressure to change the land use pattern in Dhaka city. These expansion processes are engulfing wetlands, water bodies, and vegetation cover without considering the environmental impact. In order to regain the wetlands and surface water bodies, the concerned authorities must implement laws as a legal instrument in this regard and take action against violators. 
This research is a synthesis of time series data that provides a complete picture of the status of water bodies in Dhaka city, which might help in making plans and policies for water body conservation.
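The loss rates implied by the reported areas can be computed directly. A small sketch using the four figures quoted in the abstract:

```python
# Total water-body areas in Dhaka city reported in the study (hectares).
areas_ha = {1988: 8935.38, 1998: 6065.73, 2008: 4853.32, 2017: 2077.56}

years = sorted(areas_ha)
for start, end in zip(years, years[1:]):
    loss = areas_ha[start] - areas_ha[end]
    rate = loss / (end - start)
    print(f"{start}-{end}: lost {loss:.2f} ha ({rate:.1f} ha/year)")

total_loss_pct = 100 * (areas_ha[1988] - areas_ha[2017]) / areas_ha[1988]
print(f"Total loss 1988-2017: {total_loss_pct:.1f}%")
```

The per-interval rates show the decline accelerating in the final decade, consistent with the abstract's account of intensifying encroachment.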

Keywords: ecosystem, GIS, industrialization, land use, remote sensing, urbanization

Procedia PDF Downloads 151
2550 Legal Problems with the Thai Political Party Establishment

Authors: Paiboon Chuwatthanakij

Abstract:

Countries around the world are managed in different ways, and many of them depend on their people to administer the country. Thailand, for example, vests sovereignty in the Thai people under its constitution; however, the Thai voting system is not able to respond fast enough under the current political management system. The sovereignty of the Thai people is expressed through representatives during elections, in order to set new policy for the country's direction through changes in the House and the Cabinet. This is particularly important for a democracy being developed under the current political institutions. The Organic Act on Political Parties 2007 is the governing framework we have today, and it is causing confrontations within the party system. There are many political parties that will soon be abolished, and many political parties have already been subsidized. This research study analyzes the legal problems with political party establishment under the Organic Act on Political Parties 2007, focusing on the freedom of each political establishment as compared to effective political operation. Textbooks and academic papers from studies at home and abroad are referenced. The study revealed that the Organic Act on Political Parties 2007 has strict provisions on political structure regarding the number of members and the number of branches within the party system. Such operations must be completed within one year, but under the existing laws, small parties are not able to participate with the bigger parties. The cities are capable of fulfilling small political party requirements, but the parties fail to coalesce because the current laws do not allow them to unite as one. It is important to allow all independent political parties to join the current political structure. Board members cannot help the smaller parties become a large organization under the existing Thai laws. 
Creating a new establishment that functions efficiently throughout all branches would be one solution to these legal problems between political parties. With this new arrangement, individual political parties could participate with the bigger parties during elections. Until current political institutions change their system to accommodate public opinion, the current Thai laws will continue to be a problem for all political parties in Thailand.

Keywords: coalesced, political party, sovereignty, elections

Procedia PDF Downloads 312
2549 Chikungunya Virus Detection Utilizing an Origami Based Electrochemical Paper Analytical Device

Authors: Pradakshina Sharma, Jagriti Narang

Abstract:

Due to their critical significance in the early identification of infectious diseases, electrochemical sensors have garnered considerable interest. Here, we develop a detection platform for the chikungunya virus (CHIKV) by rationally exploiting the extremely high charge-transfer efficiency of a ternary nanocomposite of graphene oxide, silver, and gold (G/Ag/Au). Because paper is an inexpensive substrate and can be produced in large quantities, the use of an origami electrochemical paper analytical device (EPAD) further enhances the sensor's appealing qualities. Paper-based testing provides a cost-effective platform for point-of-care diagnostics. Such sensors are referred to as eco-designed analytical tools due to their efficient production, use of an eco-friendly substrate, and potential to simplify waste management after measurement by incinerating the sensor. In this research, the paper's foldability has been used to develop and create a 3D multifaceted biosensor that can specifically detect CHIKV. X-ray diffraction, scanning electron microscopy, UV-vis spectroscopy, and transmission electron microscopy (TEM) were used to characterize the produced nanoparticles. In this work, aptamers are used, since they are thought to be a unique and sensitive tool for rapid diagnostic methods. Cyclic voltammetry (CV) and linear sweep voltammetry (LSV), both validated with a potentiostat, were used to measure the analytical response of the biosensor. The target CHIKV antigen was hybridized using the aptamer-modified electrode as a signal modulation platform, and its presence was determined by a decline in the current produced by its interaction with an anionic mediator, methylene blue (MB). A detection limit of 1 ng/ml and a broad linear range of 1 ng/ml to 10 µg/ml for the CHIKV antigen were reported.

Keywords: biosensors, ePAD, arboviral infections, point of care

Procedia PDF Downloads 92
2548 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG’s form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and greatly delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to achieve the detection of R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize its outcomes. The performance of the model in detecting R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
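The study's detector is a deep IncResU-Net; purely to illustrate the R-peak detection task itself, here is a naive threshold-based baseline (the threshold, refractory distance, and toy trace are all invented, and this is not the authors' method):

```python
def detect_r_peaks(signal, threshold, min_distance):
    """Naive R-peak detector: local maxima above a threshold, separated by
    at least min_distance samples (a crude refractory period)."""
    peaks = []
    for i in range(1, len(signal) - 1):
        is_peak = signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]
        if is_peak and (not peaks or i - peaks[-1] >= min_distance):
            peaks.append(i)
    return peaks

# Toy ECG-like trace: baseline noise with two sharp R-peaks at indices 3 and 9.
trace = [0.0, 0.1, 0.2, 1.5, 0.2, 0.0, 0.1, 0.3, 0.2, 1.4, 0.1, 0.0]
print(detect_r_peaks(trace, threshold=1.0, min_distance=4))  # → [3, 9]
```

Such fixed thresholds break down under baseline wander and noise, which is precisely the motivation for the learned detector described in the abstract.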

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 185
2547 Improving Graduate Student Writing Skills: Best Practices and Outcomes

Authors: Jamie Sundvall, Lisa Jennings

Abstract:

A decline in the writing skills and abilities of students entering graduate school has become a focus for university systems within the United States. This decline has become a national trend that requires reflection on the intervention strategies used to address the deficit and on unintended consequences as outcomes in the profession. Social work faculty are challenged to increase written scholarship within the academic setting. However, when a large number of students in each course have writing deficits, there is a shift away from focus on content, the ability to demonstrate competency, and the application of core social work concepts. This qualitative study focuses on the experiences of online faculty who support increasing scholarship through writing and who follow best practices in preparing students academically to see improvements in the written presentation of classroom work. This study outlines best practices to improve written academic presentation, especially in an online setting. The research also highlights how a student’s ability to show competency and application of concepts may be overlooked in the online setting. This can lead to new social workers who are prepared academically but may be unable to effectively advocate and document thought presentation in their writing. The intended progression of writing across all levels of higher education moves from summary, to application, and on to abstract problem solving. Initial findings indicate that it is important to reflect on the practices used to address writing deficits in terms of academic writing, competency, and application. It is equally important to reflect on how these methods of intervention impact a student post-graduation. Specifically, for faculty, it is valuable to assess a social worker’s ability to engage in continuity of documentation and advocacy at the micro, mezzo, macro, and international levels of practice.

Keywords: intervention, professional impact, scholarship, writing

Procedia PDF Downloads 138
2546 A Serious Game to Upgrade the Learning of Organizational Skills in Nursing Schools

Authors: Benoit Landi, Hervé Pingaud, Jean-Benoit Culie, Michel Galaup

Abstract:

Serious games have been widely disseminated in the field of digital learning. They have proved their utility in improving skills through virtual environments that simulate the field where new competencies have to be improved and assessed. This paper describes how we created CLONE, a serious game whose purpose is to help nurses create an efficient work plan in a hospital care unit. In CLONE, the number of patients to take care of is similar to the reality of the job, going far beyond what is currently practiced in nursing school classrooms. This similarity with the operational field proportionally increases the number of activities to be scheduled. Moreover, very often, the team is composed of regular nurses and nurse assistants who must share the work in accordance with regulatory obligations. Therefore, on the one hand, building a short-term plan is a complex task with a large amount of data to deal with, and on the other, good clinical practices have to be systematically applied. We present how a reference plan has been defined by addressing an optimization problem formulation using the expertise of teachers. This formulation ensures the gameplay feasibility of the scenario that has been produced and enhanced throughout the game design process. It was also crucial to steer a player toward a specific gaming strategy. As one of our most important learning outcomes is a clear understanding of the workload concept, its factual calculation for each caregiver over time and its inclusion in the nurse’s reasoning during plan elaboration are focal points. We demonstrate how to modify the game scenario to create a digital environment in which these somewhat abstract principles can be understood and applied. Finally, we report on our experience with a pilot involving a thousand undergraduate nursing students.
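The workload-balancing idea at the heart of the planning task can be sketched with a simple greedy heuristic. This is an illustration only, not CLONE's optimization formulation: the task durations and caregiver roles are invented, and the sketch ignores the skill and regulatory constraints a real nurse plan must respect:

```python
import heapq

def assign_tasks(durations_min, caregivers):
    """Greedy longest-first assignment: give each task to the caregiver with
    the lowest accumulated workload, balancing total minutes per person."""
    load = [(0, name) for name in caregivers]
    heapq.heapify(load)
    plan = {name: [] for name in caregivers}
    for task in sorted(durations_min, reverse=True):
        minutes, name = heapq.heappop(load)
        plan[name].append(task)
        heapq.heappush(load, (minutes + task, name))
    return plan

# Illustrative care activities (minutes) for one shift, shared by two caregivers.
plan = assign_tasks([30, 20, 20, 15, 10, 5], ["nurse", "assistant"])
print(plan)
```

Even this toy version makes the "factual calculation of workload per caregiver" concrete: each person's accumulated minutes are tracked explicitly as the plan is built.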

Keywords: care planning, workload, game design, hospital nurse, organizational skills, digital learning, serious game

Procedia PDF Downloads 189
2545 Records of Lepidopteron Borers (Lepidoptera) on Stored Seeds of Indian Himalayan Conifers

Authors: Pawan Kumar, Pitamber Singh Negi

Abstract:

Many regeneration failures in conifers are often attributed to heavy insect attack and pathogens during the period of seed formation and under storage conditions. Conifer berry and seed insects occur throughout the known range of the hosts and also limit the production of seed for nursery stock. On occasion, even entire seed crops are lost to insect attacks. The berries and seeds of both species have been found to be infested with insects. Recently, heavy damage to the berries and seeds of juniper and chilgoza pine was observed in the field as well as under storage conditions, leading to a reduction in the viability of seeds to germinate. Both species are under great threat, and regeneration of the species is very low. Due to the lack of adequate literature, a study on the damage potential of seed insects was urgently required to establish the exact status of the insect pests attacking the seeds and berries of both pine species, so as to develop pest management practices against them. As both species are fighting for survival and form the major vegetation of their distribution zones, the study is also important for developing management practices for the insect pests of juniper and chilgoza pine seeds and berries and for evaluating them in the nursery. A six-year study on the management of insect pests of chilgoza seeds revealed that the seeds of this species are prone to insect pests, mainly borers. During the present investigation, it was recorded that the cones are heavily attacked only by Dioryctria abietella (Lepidoptera: Pyralidae) under natural conditions, but the seeds, which are economically important, are heavily infested (sometimes up to 100% damage was recorded) by the borer Plodia interpunctella (Lepidoptera: Pyralidae), which is recorded for the first time, to the author’s best knowledge, infesting stored chilgoza seeds. 
Similarly, juniper berries and seeds were heavily attacked only by a single borer, Homaloxestis cholopis (Lepidoptera: Lecithoceridae), recorded as a new report in the natural habitat as well as under storage conditions. During the present investigation, details of insect pest attack on juniper and chilgoza pine seeds and berries were recorded, and suitable management practices were developed to contain the attacks.

Keywords: borer, chilgoza pine, cones, conifer, Lepidoptera, juniper, management, seed

Procedia PDF Downloads 146
2544 Information Visualization Methods Applied to Nanostructured Biosensors

Authors: Osvaldo N. Oliveira Jr.

Abstract:

The control of molecular architecture inherent in some experimental methods to produce nanostructured films has had a great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing, where biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides, and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy, and impedance spectroscopy. In this presentation, an overview will be provided of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to the detection of analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in biosensing experiments, computational and statistical methods have been used to optimize performance. Multidimensional projection techniques such as Sammon's mapping have been shown to be more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and in distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas disease and leishmaniasis. Optimization of biosensing may include combining another information visualization method, the parallel coordinates technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity using impedance spectroscopy. Also discussed will be the possible convergence of technologies, through which machine learning and other computational methods may be used to treat data from biosensors within an expert system for clinical diagnosis.
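Sammon's mapping works by searching for a low-dimensional layout that minimizes a stress measure over pairwise distances. A minimal sketch of that stress computation (the points are illustrative; a full implementation would iteratively optimize the layout to minimize this value):

```python
from itertools import combinations
from math import dist

def sammon_stress(high_dim, low_dim):
    """Sammon's stress: how faithfully pairwise distances in the projection
    (low_dim) preserve those in the original space (high_dim).
    Each squared error is weighted by 1/d_orig, emphasizing small distances."""
    pairs = list(combinations(range(len(high_dim)), 2))
    orig = [dist(high_dim[i], high_dim[j]) for i, j in pairs]
    proj = [dist(low_dim[i], low_dim[j]) for i, j in pairs]
    total = sum(orig)
    return sum((o - p) ** 2 / o for o, p in zip(orig, proj)) / total

# A projection that preserves all pairwise distances has zero stress.
points = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(sammon_stress(points, [(0, 0), (1, 0), (0, 1)]))  # → 0.0
```

The 1/d weighting is what distinguishes Sammon's mapping from plain multidimensional scaling: it penalizes distortions of nearby samples most, which helps separate closely clustered biosensing measurements.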

Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique

Procedia PDF Downloads 335
2543 Modified 'Perturb and Observe' with 'Incremental Conductance' Algorithm for Maximum Power Point Tracking

Authors: H. Fuad Usman, M. Rafay Khan Sial, Shahzaib Hamid

Abstract:

The trend toward renewable energy resources has been amplified by global warming and other environment-related complications in the 21st century. Recent research has strongly emphasized the generation of electrical power through renewable resources like solar, wind, hydro, geothermal, etc. The use of the photovoltaic cell has become widespread, as it is very useful for domestic and commercial purposes all over the world. Although a single cell gives a low voltage output, connecting a number of cells in series forms a complete photovoltaic module, making it a worthwhile investment as its use becomes popular. This has also reduced the price of photovoltaic cells, giving customers confidence in using this source for their electrical needs. A photovoltaic cell delivers its maximum power at a single specific operating point for a given temperature and level of solar intensity received at a given surface, and this point varies over a large range depending upon manufacturing factors, temperature conditions, insolation intensity, instantaneous shading conditions, and the aging of the photovoltaic cells. Two improved algorithms are proposed in this article for maximum power point (MPP) tracking. The most widely used algorithms are the ‘Incremental Conductance’ and ‘Perturb and Observe’ algorithms. To extract the maximum power from the source to the load, the duty cycle of the converter is effectively controlled. After assessing the previous techniques, this paper presents an improved and reformed idea for harvesting the maximum power point of photovoltaic cells. A thorough review of previous ideas was carried out before constructing the improvement on the traditional MPP technique. Each technique has its own importance and limitations under various weather conditions. An improved technique combining the use of both ‘Perturb and Observe’ and ‘Incremental Conductance’ is introduced.
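As background, one iteration of the classic 'Perturb and Observe' rule can be sketched as follows. The step size and the sign convention for the duty-cycle step (which depends on the converter topology) are assumptions for illustration; the paper's improved hybrid algorithm is not reproduced here:

```python
def perturb_and_observe(power, voltage, prev_power, prev_voltage, duty, step=0.01):
    """One iteration of the classic P&O MPPT rule: keep perturbing the duty
    cycle in the same direction while power rises, reverse when it falls.
    Assumes increasing duty moves the operating voltage up (topology-dependent)."""
    dP = power - prev_power
    dV = voltage - prev_voltage
    if dP == 0:
        return duty  # at the MPP, leave the operating point unchanged
    if (dP > 0) == (dV > 0):
        return min(duty + step, 1.0)  # moving toward the MPP: keep direction
    return max(duty - step, 0.0)      # moved past the MPP: reverse direction

# Example: power rose while voltage rose, so the duty cycle is stepped up.
duty_next = perturb_and_observe(52.0, 17.5, 50.0, 17.0, duty=0.40)
print(round(duty_next, 2))  # → 0.41
```

The fixed step is the classic weakness P&O/IncCond hybrids try to fix: a large step oscillates around the MPP, while a small step tracks changing insolation too slowly.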

Keywords: duty cycle, MPPT (Maximum Power Point Tracking), perturb and observe (P&O), photovoltaic module

Procedia PDF Downloads 176
2542 “I” on the Web: Social Penetration Theory Revised

Authors: Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology

Abstract:

The widespread use of New Media and particularly Social Media, through fixed or mobile devices, has changed in a staggering way our perception about what is “intimate" and "safe" and what is not, in interpersonal communication and social relationships. The distribution of self and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is being negotiated online, how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, new media and social networks research, as well as from the empirical findings of a longitudinal comparative research, this work proposes an integrative model for comprehending mechanisms of personal information management in interpersonal communication, which can be applied to both types of online (Computer-Mediated) and offline (Face-To-Face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries for almost over a decade. Some of these main conclusions include: (1) There is a clear and evidenced shift in users’ perception about the degree of "security" and "familiarity" of the Web, between the pre- and the post- Web 2.0 era. The role of Social Media in this shift was catalytic. (2) Basic Web 2.0 applications changed dramatically the nature of the Internet itself, transforming it from a place reserved for “elite users / technical knowledge keepers" into a place of "open sociability” for anyone. (3) Web 2.0 and Social Media brought about a significant change in the concept of “audience” we address in interpersonal communication. 
The previous "general and unknown audience" of personal home pages was converted into an "individual and personal" audience chosen by the user according to various criteria. (4) The way we negotiate the "private" and "public" nature of personal information has changed in a fundamental way. (5) The distinctive features of the mediated environment of online communication, and the critical changes that have occurred since the advent of Web 2.0, call for reconsidering and updating the theoretical models and analysis tools we use to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model is proposed here for understanding how interpersonal communication evolves, based on a revision of social penetration theory.

Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information

Procedia PDF Downloads 370
2541 A Flexible Real-Time Eco-Drive Strategy for Electric Minibus

Authors: Felice De Luca, Vincenzo Galdi, Piera Stella, Vito Calderaro, Adriano Campagna, Antonio Piccolo

Abstract:

Sustainable mobility has become one of the major issues of recent years. The challenge of reducing polluting emissions as much as possible has led to the production and diffusion of vehicles with less-polluting internal combustion engines and to the adoption of green energy vectors, such as vehicles powered by natural gas or LPG and, more recently, hybrid and electric ones. While the spread of electric vehicles for private use is becoming a reality, albeit rather slowly, the same is not happening for vehicles used in public transport, especially those operating in congested city areas. Even though the first electric buses are increasingly offered on the market, autonomy remains a central problem for battery-fed vehicles with long daily routes and little time available for recharging. In fact, at present, solid-state batteries are still too large and heavy, and unable to guarantee the required autonomy. Therefore, to maximize energy management on the vehicle, the optimization of driving profiles offers a fast and cheap contribution to improving vehicle autonomy. In this paper, following the authors' previous work on electric vehicles in public transport and energy management strategies in electric mobility, an eco-driving strategy for an electric bus is presented and validated. In particular, the characteristics of the prototype bus are described, and a general-purpose eco-drive methodology is briefly presented. The model is first simulated in MATLAB™ and then implemented on a mobile device installed on board a prototype bus developed by the authors in a previous research project. The implemented solution provides the bus driver with suggestions on the driving style to adopt. Results of a test in a real-world case are shown to highlight the effectiveness of the proposed solution in terms of energy saving.
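The abstract does not detail the advisory logic delivered to the driver; a minimal rule-based sketch of what such an on-board eco-drive suggestion could look like is given below. The function name, thresholds, and speed band are illustrative assumptions, not the authors' method.

```python
def eco_drive_advice(speed_kmh, target_kmh, accel_ms2,
                     max_eco_accel=1.0, band=0.05):
    """Return a driving-style suggestion (rule-based sketch).

    max_eco_accel (m/s^2) and the +/- 5% speed band are assumed
    tuning parameters, not values taken from the paper.
    """
    if accel_ms2 > max_eco_accel:
        return "ease off: acceleration above the eco threshold"
    if speed_kmh > target_kmh * (1 + band):
        return "coast: let regenerative braking recover energy"
    if speed_kmh < target_kmh * (1 - band):
        return "accelerate gently toward the target speed"
    return "maintain current speed"

# Example: bus slightly above the segment's target speed
print(eco_drive_advice(speed_kmh=56, target_kmh=50, accel_ms2=0.2))
```

In a real deployment, the target speed would come from the optimized driving profile computed offline, and the rules would be evaluated periodically from on-board sensor readings.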

Keywords: eco-drive, electric bus, energy management, prototype

Procedia PDF Downloads 140
2540 Population Pharmacokinetics of Levofloxacin and Moxifloxacin, and the Probability of Target Attainment in Ethiopian Patients with Multi-Drug Resistant Tuberculosis

Authors: Temesgen Sidamo, Prakruti S. Rao, Eleni Akllilu, Workineh Shibeshi, Yumi Park, Yong-Soon Cho, Jae-Gook Shin, Scott K. Heysell, Stellah G. Mpagama, Ephrem Engidawork

Abstract:

The fluoroquinolones (FQs) are used off-label for the treatment of multidrug-resistant tuberculosis (MDR-TB) and are under evaluation for shortening the duration of treatment of drug-susceptible TB in recently prioritized regimens. Within the class, levofloxacin (LFX) and moxifloxacin (MXF) play a substantial role in ensuring successful treatment outcomes. However, sub-therapeutic plasma concentrations of either LFX or MXF may drive unfavorable treatment outcomes. To the best of our knowledge, the pharmacokinetics of LFX and MXF in Ethiopian patients with MDR-TB have not yet been investigated. Therefore, the aim of this study was to develop a population pharmacokinetic (PopPK) model of LFX and MXF and to assess the percent probability of target attainment (PTA), defined by the ratio of the area under the plasma concentration-time curve over 24 h (AUC0-24) to the in vitro minimum inhibitory concentration (MIC) (AUC0-24/MIC), in Ethiopian MDR-TB patients. Steady-state plasma was collected from 39 MDR-TB patients enrolled in the programmatic treatment course, and drug concentrations were determined using optimized liquid chromatography-tandem mass spectrometry. In addition, the in vitro MICs of the patients' pretreatment clinical isolates were determined. PopPK modeling and simulations were run at various doses, and PK parameters were estimated. The effect of covariates on the PK parameters, and the PTA for maximum mycobacterial kill and for resistance prevention, were also investigated. Both LFX and MXF fit a one-compartment model with adjustments. The apparent volume of distribution (V) and clearance (CL) of LFX were influenced by serum creatinine (Scr), whereas the absorption constant (Ka) and V of MXF were influenced by Scr and body mass index (BMI), respectively.
The PTA for LFX maximal mycobacterial kill at the critical MIC of 0.5 mg/L was 29%, 62%, and 95% with the simulated 750 mg, 1000 mg, and 1500 mg doses, respectively, whereas the PTA for resistance prevention at 1500 mg was only 4.8%, with none of the lower doses achieving this target. At the critical MIC of 0.25 mg/L, there was no difference in the PTA (94.4%) for maximum mycobacterial kill among the simulated MXF doses (600 mg, 800 mg, and 1000 mg), but the PTA for resistance prevention improved proportionately with dose. Standard LFX and MXF doses may therefore not provide adequate drug exposure. LFX PopPK is more predictable for maximum mycobacterial kill, whereas attainment of the MXF resistance-prevention target increases with dose. Scr and BMI are likely to be important covariates in dose optimization or therapeutic drug monitoring (TDM) studies in Ethiopian patients.
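The PTA computation described above (the fraction of simulated patients whose AUC0-24/MIC reaches a target ratio under a one-compartment model) can be sketched as a simple Monte Carlo over clearance variability, where at steady state AUC0-24 = dose / CL. All parameter values below (typical clearance, variability, target ratio) are illustrative assumptions, not the study's estimates.

```python
import math
import random

def simulate_pta(dose_mg, cl_typical, cl_cv, mic, target_ratio,
                 n=10000, seed=1):
    """Monte Carlo probability of target attainment (PTA).

    Draws clearance (CL, L/h) from a log-normal distribution with
    coefficient of variation cl_cv, computes the steady-state exposure
    AUC0-24 = dose / CL (mg*h/L), and returns the fraction of simulated
    patients with AUC0-24/MIC >= target_ratio.
    """
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + cl_cv ** 2))  # log-normal SD from CV
    hits = 0
    for _ in range(n):
        # mean-corrected log-normal draw around the typical clearance
        cl = cl_typical * math.exp(rng.gauss(0.0, sigma) - sigma ** 2 / 2)
        auc_24 = dose_mg / cl
        if auc_24 / mic >= target_ratio:
            hits += 1
    return hits / n

# Illustrative run: LFX-like doses at the critical MIC of 0.5 mg/L
for dose in (750, 1000, 1500):
    print(dose, simulate_pta(dose, cl_typical=6.0, cl_cv=0.4,
                             mic=0.5, target_ratio=146))
```

With this structure, PTA rises monotonically with dose at a fixed MIC, which mirrors the dose-dependent pattern reported in the abstract; the actual study would sample the full PopPK parameter vector (including covariate effects of Scr and BMI) rather than clearance alone.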

Keywords: population PK, PTA, moxifloxacin, levofloxacin, MDR-TB patients, Ethiopia

Procedia PDF Downloads 119