Search results for: channel estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3043

343 Assessment of Climate Change Impacts on the Hydrology of Upper Guder Catchment, Upper Blue Nile

Authors: Fikru Fentaw Abera

Abstract:

Climate change alters regional hydrologic conditions and results in a variety of impacts on water resource systems. Such hydrologic changes will affect almost every aspect of human well-being. The goal of this paper is to assess the impact of climate change on the hydrology of the Upper Guder catchment, located in the northwest of Ethiopia. GCM-derived scenario experiments (HadCM3 A2a and B2a SRES emission scenarios) were used for the climate projection. The statistical downscaling model (SDSM) was used to generate possible future local meteorological variables in the study area. The downscaled data were then used as input to the Soil and Water Assessment Tool (SWAT) model to simulate the corresponding future streamflow regime in the Upper Guder catchment of the Abay River Basin. A semi-distributed hydrological model, SWAT, was developed, and Generalized Likelihood Uncertainty Estimation (GLUE) was utilized for uncertainty analysis. GLUE is linked with SWAT in the Calibration and Uncertainty Program known as SWAT-CUP. Three benchmark periods were simulated for this study: the 2020s, 2050s, and 2080s. The time series generated by the HadCM3 GCM (A2a and B2a) and the Statistical Downscaling Model (SDSM) indicate a significant increasing trend in maximum and minimum temperature values and a slight increasing trend in precipitation for both A2a and B2a emission scenarios at both the Gedo and Tikur Inch stations for all three benchmark periods. The hydrologic impact analysis, made with the downscaled temperature and precipitation time series as input to the hydrological model SWAT, was carried out for both A2a and B2a emission scenarios. The model output shows that there may be an annual increase in flow volume of up to 35% for both emission scenarios in the three benchmark periods. All seasons show an increase in flow volume for both A2a and B2a emission scenarios for all time horizons. Potential evapotranspiration in the catchment will also increase annually, on average by 3-15% for the 2020s and 7-25% for the 2050s and 2080s, for both A2a and B2a emission scenarios.
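
To make the GLUE step concrete, the sketch below (Python) weights "behavioural" parameter sets by a likelihood measure to form uncertainty bounds on simulated flow. The Nash-Sutcliffe likelihood and the 0.5 acceptance threshold are illustrative assumptions, not the settings used in SWAT-CUP.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency, used here as the GLUE likelihood measure."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_bounds(obs, simulations, threshold=0.5, quantiles=(0.05, 0.95)):
    """Keep 'behavioural' runs (likelihood > threshold), normalize their
    likelihoods into weights, and return weighted prediction bounds."""
    sims = np.asarray(simulations)                      # (n_runs, n_steps)
    likes = np.array([nash_sutcliffe(obs, s) for s in sims])
    keep = likes > threshold
    weights = likes[keep] / likes[keep].sum()
    ensemble = sims[keep]
    bounds = []
    for q in quantiles:
        col = []
        for t in range(ensemble.shape[1]):
            order = np.argsort(ensemble[:, t])
            cdf = np.cumsum(weights[order])
            col.append(ensemble[order, t][np.searchsorted(cdf, q)])
        bounds.append(np.array(col))
    return bounds  # [lower, upper] flow bounds per time step
```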

Keywords: climate change, Guder sub-basin, GCM, SDSM, SWAT, SWAT-CUP, GLUE

Procedia PDF Downloads 331
342 Media Impression and Its Impact on Foreign Policy Making: A Study of India-China Relations

Authors: Rosni Lakandri

Abstract:

With the development of science and technology, there has been a complete transformation in the domain of information technology. Particularly after the Second World War and the Cold War period, the role of media and communication technology in shaping political, economic, and socio-cultural proceedings across the world has been tremendous. Media performs as a channel between the governing bodies of the state and the general masses. As the international community constantly talks about the onset of an Asian Century, India and China happen to be the major players in this. Both have civilizational histories, both are neighboring countries, both are witnessing huge economic growth and, most important of all, both are considered the rising powers of Asia. This is not to negate the fact that the two countries went to war with each other in 1962, and the common people, and even the policy makers, of both sides still view each other through this prism. A huge contribution to this perception goes to the media coverage on both sides; even where there are spaces of cooperation, the negative impact of media coverage has tended to influence people's opinion and each government's perception of the other. Therefore, analysis of media impressions in both countries becomes important in order to know their effect on the larger implications of foreign policy towards each other. It is usually said that media not only acts as an information provider but also acts as an ombudsman to the government. Media provides a kind of check and balance for governments in taking proper decisions for the people of the country, but in attempting to test this hypothesis we have to analyze whether the media really helps in shaping the political landscape of a country. Therefore, this study rests on the following questions: 1. How do China and India depict each other through their respective news media? 2. How much, and what, influence do they exert on the policy-making process of each country? How do they shape public opinion in both countries? In order to address these enquiries, the study employs both the primary and secondary sources available. In generating data and other statistical information, primary sources like reports, government documents, cartography, and agreements between the governments have been used. Secondary sources like books, articles, and other writings collected from various sources, and opinion from visual media sources like news clippings and videos on this topic, are also included as sources of on-the-ground information, as this study is not based on field study. As the findings suggest, in the case of China and India, media has certainly affected people's knowledge about political and diplomatic issues and, at the same time, has affected the foreign policy making of both countries. Media have a considerable impact on foreign policy formulation, and we can say there is some mediatization happening in foreign policy issues in both countries.

Keywords: China, foreign policy, India, media, public opinion

Procedia PDF Downloads 132
341 Microfluidic Plasmonic Bio-Sensing of Exosomes by Using a Gold Nano-Island Platform

Authors: Srinivas Bathini, Duraichelvan Raju, Simona Badilescu, Muthukumaran Packirisamy

Abstract:

A bio-sensing method, based on the plasmonic property of gold nano-islands, has been developed for the detection of exosomes in a clinical setting. The position of the gold plasmon band in the UV-visible spectrum depends on the size and shape of the gold nanoparticles as well as on the surrounding environment. When various chemical entities are adsorbed or bound, the gold plasmon band shifts toward longer wavelengths, and the shift is proportional to their concentration. Exosomes transport cargoes of molecules and genetic materials to proximal and distal cells. Presently, the standard method for their isolation and quantification from body fluids is ultracentrifugation, which is not a practical method to implement in a clinical setting. Thus, a versatile and cutting-edge platform is required to selectively detect and isolate exosomes for further analysis at the clinical level. The new sensing protocol, instead of antibodies, makes use of a specially synthesized polypeptide (Vn96) to capture and quantify exosomes from different media by binding the heat shock proteins of exosomes. The protocol has been established and optimized by using a glass substrate, in order to facilitate the next stage, namely the transfer of the protocol to a microfluidic environment. After each step of the protocol, the UV-Vis spectrum was recorded and the position of the gold Localized Surface Plasmon Resonance (LSPR) band was measured. The sensing process was modelled, taking into account the characteristics of the nano-island structure, prepared by thermal convection and annealing. The optimal molar ratios of the most important chemical entities involved in the detection of exosomes were calculated as well. Indeed, it was found that the results of the sensing process depend on two major steps: the molar ratio of streptavidin to biotin-PEG-Vn96 and, in the final step, the capture of exosomes by the biotin-PEG-Vn96 complex. The microfluidic device designed for the sensing of exosomes consists of a glass substrate sealed by a PDMS layer that contains the channel and a collecting chamber. In the device, the solutions of linker, cross-linker, etc., are pumped over the gold nano-islands, and an Ocean Optics spectrometer is used to measure the position of the Au plasmon band at each step of the sensing. The experiments have shown that the shift of the Au LSPR band is proportional to the concentration of exosomes and, thereby, exosomes can be accurately quantified. An important advantage of the method is the ability to discriminate between exosomes having different origins.
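
Since the band shift is reported to be proportional to exosome concentration, quantification reduces to inverting a linear calibration. A minimal sketch with made-up calibration values (not data from the study):

```python
import numpy as np

# Illustrative calibration: LSPR band red-shift vs. exosome concentration
concentration = np.array([0.0, 1.0, 2.0, 4.0, 8.0])  # arbitrary units
band_shift_nm = np.array([0.0, 0.9, 2.1, 4.2, 8.1])  # shift of the Au LSPR band

slope, intercept = np.polyfit(concentration, band_shift_nm, 1)

# Quantify an unknown sample from its measured band shift
unknown_shift_nm = 3.0
estimated_conc = (unknown_shift_nm - intercept) / slope
print(f"estimated exosome concentration: {estimated_conc:.2f} a.u.")
```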

Keywords: exosomes, gold nano-islands, microfluidics, plasmonic biosensing

Procedia PDF Downloads 149
340 Effective Medium Approximations for Modeling Ellipsometric Responses from Zinc Dialkyldithiophosphates (ZDDP) Tribofilms Formed on Sliding Surfaces

Authors: Maria Miranda-Medina, Sara Salopek, Andras Vernes, Martin Jech

Abstract:

Sliding lubricated surfaces induce the formation of tribofilms that reduce friction and wear and prevent large-scale damage of contact parts. Engine oils and lubricants use antiwear and antioxidant additives such as zinc dialkyldithiophosphate (ZDDP), from which protective tribofilms are formed by degradation. ZDDP tribofilms are described as a two-layer structure composed of inorganic polymer material. On the top surface, the long-chain polyphosphate is a zinc phosphate, and in the bulk, the short-chain polyphosphate is a mixed Fe/Zn phosphate with a gradient concentration. The polyphosphate chains partially adhere to the steel surface through a sulfide and work as anti-wear pads. In this contribution, ZDDP tribofilms formed on gray cast iron surfaces are studied. The tribofilms were generated in a reciprocating sliding tribometer with a piston ring-cylinder liner configuration. Fully formulated oil of SAE grade 5W-30 was used as lubricant during two tests, at 40 Hz and 50 Hz. For the estimation of the tribofilm thicknesses, spectroscopic ellipsometry was used due to its high accuracy and non-destructive nature. Ellipsometry works on an optical principle whereby the change in polarisation of light reflected by the surface is associated with the refractive index of the surface material or with the thickness of the layer deposited on top. Ellipsometric responses derived from tribofilms are modelled by effective medium approximation (EMA), which includes the refractive indices of the involved materials, the homogeneity of the film, and its thickness. The material composition was obtained from X-ray photoelectron spectroscopic studies, where the presence of ZDDP, O, and C was confirmed. From the EMA models it was concluded that tribofilms formed at 40 Hz are thicker and more homogeneous than the ones formed at 50 Hz. In addition, the refractive indices of the individual materials are mixed to derive an effective refractive index that describes the optical composition of the tribofilm and exhibits a maximum response in the UV range, a characteristic of glassy semitransparent films.
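
The abstract does not state which EMA formulation was used; the sketch below shows one common choice, the Maxwell Garnett mixing rule for spherical inclusions in a host medium, with illustrative refractive indices:

```python
import numpy as np

def maxwell_garnett(eps_host, eps_incl, f):
    """Effective permittivity of spherical inclusions (volume fraction f)
    in a host medium, per the Maxwell Garnett mixing rule."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

# Illustrative refractive indices (not values from the study)
n_host, n_incl, fraction = 1.55, 2.10, 0.30
n_eff = np.sqrt(maxwell_garnett(n_host**2, n_incl**2, fraction))
print(f"effective refractive index: {n_eff:.3f}")
```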

Keywords: effective medium approximation, reciprocating sliding tribometer, spectroscopic ellipsometry, zinc dialkyldithiophosphate

Procedia PDF Downloads 226
339 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: - Resource fragmentation due to the virtual machine boundary. - Poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation, and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding deploys a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine, m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need of a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations by just deploying a microservice container on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs and failure tolerance.
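
A minimal sketch of the embedding example above (pairing the i-th instances of two communicating services on the same machine, so each pair talks over localhost; the names a1/b1/m1 follow the example):

```python
def embed(instances_a, instances_b):
    """Co-locate the i-th instance of service A with the i-th instance of
    service B, so each pair communicates over localhost with no load
    balancer between A and B."""
    return {f"m{i}": [a, b]
            for i, (a, b) in enumerate(zip(instances_a, instances_b), start=1)}

# a1-b1 land on m1 and a2-b2 on m2, as in the example above
print(embed(["a1", "a2"], ["b1", "b2"]))
# {'m1': ['a1', 'b1'], 'm2': ['a2', 'b2']}
```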

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 176
338 Yield Loss Estimation Using Multiple Drought Severity Indices

Authors: Sara Tokhi Arab, Rozo Noguchi, Tofeal Ahamed

Abstract:

Drought is a natural disaster that occurs in a region due to a lack of precipitation and high temperatures over a continuous period or in a single season as a consequence of climate change. Precipitation deficits and prolonged high temperatures mostly affect the agricultural sector, water resources, socioeconomics, and the environment. Consequently, drought causes agricultural product loss, food shortage, famine, migration, and natural resource degradation in a region. Agriculture is the first sector affected by drought. Therefore, it is important to develop an agricultural drought risk and loss assessment to mitigate the drought impact on the agriculture sector. In this context, the main purpose of this study was to assess yield loss using composite drought indices in drought-affected vineyards. In this study, the composite drought index (CDI) was developed for the years 2016 to 2020 by combining five indices: the vegetation condition index (VCI), the temperature condition index (TCI), the deviation of NDVI from the long-term mean (NDVI DEV), the normalized difference moisture index (NDMI), and the precipitation condition index (PCI). Moreover, the quantitative principal component analysis (PCA) approach was used to assign a weight to each input parameter, and then the weights of all the indices were combined into one composite drought index. Finally, Bayesian regularized artificial neural networks (BRANNs) were used to evaluate the yield variation in each affected vineyard. The composite drought index results indicated that moderate to severe droughts were observed across Kabul Province during 2016 and 2018. Moreover, the results showed that no vineyard was in extreme drought conditions; therefore, only the severe and moderate conditions were considered. According to the BRANN results, R = 0.87 and R = 0.94 in severe drought conditions for the years 2016 and 2018, and R = 0.85 and R = 0.91 in moderate drought conditions for the years 2016 and 2018, respectively. Within the two drought years in Kabul Province, there was a significant yield deficit in the vineyards. According to the findings, 2018 had the highest rate of loss, almost -7 ton/ha, whereas in 2016 the loss rate was about -1.2 ton/ha. This research will support stakeholders in identifying drought-affected vineyards and in supporting farmers during severe droughts.
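
A minimal sketch of the CDI construction described above; the PCA-derived weights are placeholders, not the loadings computed in the study:

```python
import numpy as np

def composite_drought_index(indices, weights):
    """Weighted linear combination of normalized drought indices.
    `indices` maps name -> per-pixel array; `weights` come from the PCA."""
    names = sorted(indices)
    stack = np.stack([indices[n] for n in names])         # (5, rows, cols)
    w = np.array([weights[n] for n in names])[:, None, None]
    return (w * stack).sum(axis=0)

# Placeholder weights standing in for the study's PCA loadings
weights = {"VCI": 0.25, "TCI": 0.20, "NDVI_DEV": 0.20, "NDMI": 0.20, "PCI": 0.15}
indices = {k: np.random.rand(4, 4) for k in weights}      # toy 4x4 rasters
cdi = composite_drought_index(indices, weights)
```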

Keywords: grapes, composite drought index, yield loss, satellite remote sensing

Procedia PDF Downloads 125
337 Liraglutide Augments Extra Body Weight Loss after Sleeve Gastrectomy without Change in Intrahepatic and Intra-Pancreatic Fat in Obese Individuals: Randomized, Controlled Study

Authors: Ashu Rastogi, Uttam Thakur, Jimmy Pathak, Rajesh Gupta, Anil Bhansali

Abstract:

Introduction: Liraglutide is known to induce weight loss and metabolic benefits in obese individuals. However, its effects after sleeve gastrectomy are not known. Methods: People with obesity (BMI > 27.5 kg/m2) underwent laparoscopic sleeve gastrectomy (LSG). Subsequently, participants were randomized to receive either 0.6 mg liraglutide subcutaneously daily from week 6 post-surgery, continued until week 24 (L-L group), or placebo (L-P group). Patients were assessed before surgery (baseline) and at 6, 12, 18, and 24 weeks after surgery for height, weight, waist and hip circumference, BMI, body fat percentage, HbA1c, fasting C-peptide, fasting insulin, HOMA-IR, HOMA-β, and GLP-1 levels (after a standard OGTT). MRI of the abdomen was performed prior to surgery and at 24 weeks postoperatively for the estimation of intra-pancreatic and intra-hepatic fat content. Outcome measures: Primary outcomes were changes in the metabolic variables of fasting and stimulated GLP-1 levels, insulin, C-peptide, and plasma glucose levels. Secondary variables were indices of insulin resistance (HOMA-IR, Matsuda index) and pancreatic and hepatic steatosis. Results: Thirty-eight patients undergoing LSG were screened and 29 participants were enrolled. Two patients withdrew consent and one patient died of an acute coronary event; 26 patients were randomized and their data analysed. Median BMI was 40.73±3.66 and 46.25±6.51, and EBW was 49.225±11.14 and 651.48±4.85, in the L-P and L-L groups, respectively. Baseline FPG was 132±51.48 and 125±39.68; fasting insulin 21.5±13.99 and 13.15±9.20; fasting GLP-1 2.4±0.37 and 2.4±0.32; AUC GLP-1 340.78±44 and 332.32±44.1; HOMA-IR 7.0±4.2 and 4.42±4.5 in the L-P and L-L groups, respectively. EBW loss was 47±13.20 and 65.59±24.20 (p<0.05) in the placebo versus liraglutide groups. However, we did not observe inter-group differences in metabolic parameters in spite of significant intra-group changes after 6 months of LSG. Intra-pancreatic fat prior to surgery was 3.21±1.7 and 2.2±0.9 (p=0.38), which decreased to 2.14±1.8 and 1.06±0.8 (p=0.25) at 6 months in the L-P and L-L groups, respectively. Similarly, intra-hepatic fat was 1.97±0.27 and 1.88±0.36 (p=0.361) at baseline, which decreased to 1.14±0.44 and 1.36±0.47 (p=0.465) at 6 months in the L-P and L-L groups, respectively. Conclusion: Liraglutide augments extra body weight loss after sleeve gastrectomy. A decrease in intra-pancreatic and intra-hepatic fat is noticed after bariatric surgery without additive benefit of liraglutide administration.

Keywords: sleeve gastrectomy, liraglutide, intra-pancreatic fat, insulin

Procedia PDF Downloads 168
336 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass estimation, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure such as canopy height. However, LiDAR's coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, have the ability to cover large forest areas with a high repeat rate, but they do not have height information. Hence, exploring the solution of integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, with a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained RFR and CNN models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show that the mean absolute error (MAE) for 2018 is 2.93 m for the RFR model and 1.71 m for the CNN model, while the MAE for 2021 is 3.35 m for the RFR model and 3.78 m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
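
A minimal sketch of the RFR pathway (Sentinel-2 band values as predictors, LiDAR-derived canopy height as the target); the arrays are synthetic stand-ins for the actual rasters:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 10 Sentinel-2 band values per 10 m pixel (X) and
# the LiDAR-derived canopy height in metres for that pixel (y)
rng = np.random.default_rng(0)
X = rng.random((5000, 10))
y = 30 * X[:, 3] + 5 * rng.standard_normal(5000)  # toy height signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (m):", mean_absolute_error(y_te, model.predict(X_te)))
```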

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 59
335 A Ten-Year Rabies Exposure and Death Surveillance Data Analysis in Tigray Region, Ethiopia, 2023

Authors: Woldegerima G. Medhin, Tadele Araya

Abstract:

Background: Rabies is an acute viral encephalitis affecting mainly carnivores and insectivores, but it can affect any mammal. The case fatality rate is 100% once clinical signs appear. Rabies has a worldwide distribution in the continental regions of Asia and Africa. Globally, rabies is responsible for more than 61,000 human deaths annually. Estimated annual human rabies deaths in Asia and Africa exceed 35,172 and 21,476, respectively. In Ethiopia, approximately 2,900 people are estimated to die of rabies annually; in Tigray region, approximately 98 people are estimated to die annually. The aim of this study is to analyze trends in, describe, and evaluate ten years of rabies data in Tigray, Ethiopia. Methods: We conducted a descriptive epidemiological study, from 15-30 February 2023, of rabies exposure and death in humans by reviewing the health management information system reports from the Tigray Regional Health Bureau and the vaccination coverage of the dog population from 2013 to 2022. We used the following case definitions: suspected cases are those bitten by dogs displaying clinical signs consistent with rabies, and confirmed cases are deaths from rabies at the time of the exposure. Results: A total of 21,031 dog bites, 375 reported deaths from rabies, and 18,222 post-exposure treatments for humans in Tigray region were analyzed. Suspected rabies cases showed an increasing trend from 2013 to 2015 and from 2018 to 2019. The overall mortality rate was 19/1000 in Tigray. The majority of suspected patients (45%) were aged <15 years. The Agriculture Bureau of Tigray Region estimates that about 12,000 owned and 2,500 stray dogs are present in the region, but yearly dog vaccination coverage remains low (50%). Conclusion: Rabies is a public health problem in Tigray region. It is highly recommended to vaccinate individually owned dogs, and the sectors concerned should eliminate stray dogs. The surveillance system should be strengthened to estimate the real magnitude and to launch preventive and control measures.

Keywords: rabies, virus, transmission, prevalence

Procedia PDF Downloads 44
334 Toxicity of PPCPs on Adapted Sludge Community

Authors: G. Amariei, K. Boltes, R. Rosal, P. Leton

Abstract:

Wastewater treatment plants (WWTPs) are supposed to hold an important place in the reduction of emerging contaminants, but they provide an environment that has potential for the development and/or spread of adaptation, as bacteria are continuously mixed with contaminants at sub-inhibitory concentrations. Reviewing the literature, there are few data available regarding the use of adapted bacteria forming an activated sludge community for toxicity assessment, and only individual validations have been performed. Therefore, the aim of this work was to study the toxicity of triclosan (TCS) and ibuprofen (IBU), individually and in binary combination, on adapted activated sludge (AS). For this purpose, a battery of biomarkers was assessed, involving oxidative stress and cytotoxicity responses: glutathione-S-transferase (GST), catalase (CAT), and viable cells with FDA. In addition, we compared the toxic effects on adapted bacteria with those on unadapted bacteria from previous research. The adapted AS comes from three continuous-flow AS laboratory systems; two systems received IBU and TCS individually, while the other received the binary combination, for 14 days. After adaptation, each bacterial culture condition was exposed to IBU, TCS, and the combination for 12 h. The concentrations of IBU and TCS ranged over 0.5-4 mg/L and 0.012-0.1 mg/L, respectively. Batch toxicity experiments were performed using an Oxygraph system (Hansatech) for determining the activity of the CAT enzyme, based on the quantification of the oxygen production rate. A fluorimetric technique was applied as well, using a Fluoroskan Ascent FL (Thermo), for determining the activity of the GST enzyme, using monochlorobimane-GSH as substrate, and for the estimation of the viable cells of the sludge by fluorescence staining using fluorescein diacetate (FDA). For the IBU-adapted sludge, CAT activity increased at low concentrations of IBU, TCS, and the mixture. However, with increasing concentration the behavior differed: while IBU tended to stabilize CAT activity, TCS and the mixture decreased it. GST activity was significantly increased by TCS and the mixture; for IBU, no variation was observed. For the TCS-adapted sludge, no significant variation in CAT activity was observed, and GST activity was significantly decreased for all contaminants. For the mixture-adapted sludge, the behavior of CAT activity was similar to that of the IBU-adapted sludge; GST activity decreased at all concentrations of IBU, while the presence of TCS and of the mixture, respectively, increased GST activity. These findings were consistent with the cell viability evaluation, which clearly showed a variation in sludge viability. Our results suggest that, compared with unadapted bacteria, adapted bacterial conditions play a relevant role in toxicity behaviour towards activated sludge communities.

Keywords: adapted sludge community, mixture, PPCPs, toxicity

Procedia PDF Downloads 378
333 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
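
For reference, a standard parameterization of the fOU process mentioned above (the notation below is the conventional one, not necessarily the authors') is the mean-reverting equation driven by fractional Brownian motion B^H with Hurst parameter H, where rough paths correspond to H < 1/2:

```latex
dX_t = -\theta \,(X_t - \mu)\, dt + \sigma \, dB^H_t , \qquad \theta > 0 ,
```

so the parameters the network must learn from sample paths are the mean-reversion speed θ, the long-run mean μ, the scale σ, and the Hurst exponent H.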

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 84
332 Deep Mill Level Zone (DMLZ) of Ertsberg East Skarn System, Papua; Correlation between Structure and Mineralization to Determine Characteristic Orebody of DMLZ Mine

Authors: Bambang Antoro, Lasito Soebari, Geoffrey de Jong, Fernandy Meiriyanto, Michael Siahaan, Eko Wibowo, Pormando Silalahi, Ruswanto, Adi Budirumantyo

Abstract:

The Ertsberg East Skarn System (EESS) is located in the Ertsberg Mining District, Papua, Indonesia. EESS is a sub-vertical zone of copper-gold mineralization hosted in both diorite (vein-style mineralization) and skarn (disseminated and vein-style mineralization). The Deep Mill Level Zone (DMLZ) is a mining zone in the lower part of the East Ertsberg Skarn System that produces copper and gold. The Deep Mill Level Zone deposit is located below the Deep Ore Zone deposit, between the 3125 m and 2590 m elevations; it measures roughly 1,200 m in length and is between 350 and 500 m in width. Mining of the DMLZ was planned to start in Q2 2015 at an ore extraction rate of about 60,000 tpd by the block cave mining method (the block cave contains 516 Mt). Mineralization and associated hydrothermal alteration in the DMLZ are hosted and enclosed by a large stock (the Main Ertsberg Intrusion) that is barren on all sides and above the DMLZ. Late porphyry dikes that cut through the Main Ertsberg Intrusion are spatially associated with the center of the DMLZ hydrothermal system. The DMLZ orebody is hosted in diorite and skarn, both dominated by vein-style mineralization. The percentages of material mined at the DMLZ, compared with current reserves, are: diorite 46% (with 0.46% Cu, 0.56 ppm Au, and 0.83% EqCu); skarn 39% (with 1.4% Cu, 0.95 ppm Au, and 2.05% EqCu); hornfels 8% (with 0.84% Cu, 0.82 ppm Au, and 1.39% EqCu); and marble 7%, possibly mined as waste. Correlation between the Ertsberg intrusion, major structures, and vein-style mineralization is important to determine the characteristics of the orebody in the DMLZ Mine. Generally, the Deep Mill Level Zone has two types of vein-filling mineralization in both hosts (diorite and skarn): in the diorite host, the vein system is filled by chalcopyrite-bornite-quartz and pyrite; in the skarn host, the veins are filled by chalcopyrite-bornite-pyrite and magnetite without quartz. Based on orientation, the stockwork veins in the diorite host and the shallow veins in the skarn host generally trend NW-SE and NE-SW with shallow to moderate dips. The Deep Mill Level Zone is controlled by two main major faults; geologists have found and verified local structures between the major structures, with NW-SE and NE-SW trends and with characteristic slickensides, shearing, gouge, and water-gas channels, some of which have been re-healed.

Keywords: copper-gold, DMLZ, skarn, structure

Procedia PDF Downloads 480
331 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City

Authors: Sultan Ahmad Azizi, Gaurang J. Joshi

Abstract:

Urban transportation has come into the limelight in recent times due to deteriorating travel quality. The economic growth of India has boosted a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems like organized bus services, most metropolitan cities have an unsustainably low share of public transport. Unfortunately, Indian metropolitan cities have failed to maintain balance in the mode share of various travel modes in the absence of timely introduction of mass transit systems of the required capacity and quality. As a result, personalized travel modes like two-wheelers have become the principal modes of travel, which cause significant environmental, safety, and health hazards to the citizens. Of late, policy makers have realized the need to improve public transport systems in metro cities to sustain development. However, the challenge to the transit planning authorities is to design a transit system for cities that may attract people to switch over from their existing, and rather convenient, mode of travel to the transit system, under the influence of household socio-economic characteristics and the given travel pattern. In this context, the fast-growing industrial city of Surat is taken up as a case for the study of the likely shift to bus transit. Deterioration of the public bus transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads. The inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode-use balance in the city. Disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated under a multinomial logit environment for two-wheelers, cars, and auto rickshaws with respect to bus transit using SPSS. The estimation of the shift to bus transit indicates that, on average, 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved. However, car users are not expected to shift to the bus transit system.
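
For reference, under the multinomial logit environment used for calibration, the probability that a traveler chooses mode i from choice set C follows from the mode-specific utility functions V:

```latex
P(i \mid C) \;=\; \frac{e^{V_i}}{\sum_{j \in C} e^{V_j}}
```

where each V_i is a linear function of household socio-economic characteristics and travel attributes; the reported shift estimates are the predicted changes in these probabilities as the bus-transit service attributes improve.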

Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport

Procedia PDF Downloads 235
330 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and has become popular in the forecasting area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and healthy, reliable grid systems. Effective power forecasting of renewable energy load leads decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can be used to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data. LSTM algorithms are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as predicting wind and solar energy power. Historical load and weather information represent the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out with these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by changing the layer cell count and dropout. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent). ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results out of the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to literature searches.
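
A minimal sketch of one such model configuration (Keras); the layer size and dropout rate are placeholders for the hyperparameters the study varied across its 432 models:

```python
import numpy as np
from tensorflow import keras

# Synthetic stand-in for the hourly load data: 24 h of history per sample,
# one feature (load), predicting the next hour's load
X = np.random.rand(1000, 24, 1).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(24, 1)),  # cell count: one varied knob
    keras.layers.Dropout(0.2),                   # dropout: the other varied knob
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])  # ADAM, as in the study
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("training MAE:", model.evaluate(X, y, verbose=0)[1])
```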

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 237
329 A Conceptual Model of the Factors Affecting Saudi Citizens' Use of Social Media to Communicate with the Government

Authors: Reemiah Alotaibi, Muthu Ramachandran, Ah-Lian Kor, Amin Hosseinian-Far

Abstract:

In the past decade, developers of Web 2.0 technologies have shown increasing interest in the topic of e-government. There has been rapid growth in social media technology because of its significant role in backing up some essential social needs. Its importance and power derive from its capacity to support two-way communication. Governments are keen to get engaged with these websites, hoping to benefit from the new forms of communication and interaction offered by such technology. Greater participation by the public can be viewed as a chief indicator of effective government communication. Yet, the level of public participation in government 2.0 is not quite satisfactory. In general, it is still at an early stage in most developing countries, including Saudi Arabia. Although it is a fact that Saudi people are among the most active users of social media, the number of people who use social media to communicate with public institutions is not high. Furthermore, most governmental organisations are not using social media tools to communicate with the public; they use these platforms to disseminate information. Our study focuses on the factors affecting citizens' adoption of social media in Saudi Arabia. Our research question is: what are the factors affecting Saudi citizens' use of social media to communicate with the government? To answer this research question, the research aims to validate the UTAUT model for examining social media tools from the citizen perspective. An amendment will be proposed to fit the adoption of social media platforms as a communication channel in government, using a developed conceptual model that integrates constructs from the UTAUT model and other external variables based on the literature review. The set of potential factors that affect these citizens' decisions to adopt social media to communicate with their government has been identified as perceived encouragement, trust, and cultural influence. The connections between the above-mentioned constructs form the basis for the research hypotheses, which will be examined in the light of a quantitative methodology. Data collection will be performed through a survey targeting a number of Saudi citizens who are social media users. The data collected from the primary survey will later be analysed by using statistical methods. The outcomes of this research project are argued to have potential contributions to the fields of social media and e-government adoption, both on the theoretical and practical levels. It is believed that this research project is the first of its type that attempts to identify the factors that affect citizens' adoption of social media to communicate with the government. The importance of identifying these factors stems from their potential use to enhance the government's implementation of social media and to help in making more accurate decisions and strategies based on comprehending the most important factors that affect citizens' decisions.

Keywords: social media, adoption, citizen, UTAUT model

Procedia PDF Downloads 391
328 Measuring the Visibility of the European Open Access Journals with Bibliometric Indicators

Authors: Maja Jokić, Andrea Mervar, Stjepan Mateljan

Abstract:

Peer-reviewed journals, as the main communication channel among researchers, fully achieve their objective if they are available to the global research community, which is accomplished through open access. In the EU countries, the idea of open access has spread over the years through various projects, initiatives, and strategic documents. Consequently, in this paper we analyze, using various bibliometric indicators, the visibility and significance of open access peer-reviewed journals compared to conventional (non-open access) ones. We examine a sample of open access (OA) journals in the 28 EU countries and in three EU candidate countries (Bosnia and Herzegovina, FYR Macedonia, and Serbia), all indexed by Scopus (N=1,522). These journals comprise 42% of the total number of OA journals indexed by Scopus. The distribution of OA journals in our sample according to subject fields indicates that the largest share belongs to OA journals in Health Sciences, 29%, followed by Social Sciences and Physical Sciences with 25% each, and Life Sciences with 21%. At the same time, the distribution according to countries (N=31) shows the dominance of the EU15 countries with a share of 68.3% (N=1,041), while the post-socialist European countries (EU11 plus the three EU candidate countries) have a share of 31.6% (N=481). Bibliometric indicators are derived from the SCImago Journal Rank database. The analysis of OA journals according to their quartile scores (which reflect the relation between the number of articles and their citations) shows that the largest number of OA journals from our sample was in the third quartile in 2015. For comparison, the majority of all academic journals indexed in Scopus from the countries in our sample were, in the same year, in the first quartile. The median SJR indicator (SCImago Journal Rank) for 2015, which measures a journal's prestige, amounted to 0.297 for the OA journals from the sample, while it was modestly lower, 0.284, for all OA journals. The value of the same indicator for all journals indexed by Scopus (N=11,086) from our group of countries was 0.358, which is significantly different from the one for OA journals. Apart from the number of OA journals, we also confirm significant differences between the EU15 and post-socialist countries in the bibliometric status of OA journals. The median SJR indicator for 2015 for the EU15 countries was 0.394, while for the post-socialist countries it amounted to 0.226. The changes in bibliometric indicators (quartile score, SJR, SNIP (Source Normalized Impact per Paper), and IPP (Impact per Publication)) of OA journals during the 2012-2015 period, as well as the H-index for the four main subject fields (Life Sciences, Physical Sciences, Social Sciences, and Health Sciences) in the whole sample as well as in the two main groups of European countries, show an increasing trend of acceptance and visibility of OA journals within the academic community. More comprehensive insights into the visibility of OA journals could be reached by using additional qualitative research methods, such as interviews with researchers.

Keywords: bibliometric analysis, European countries, journal evaluation, open access journals

Procedia PDF Downloads 192
327 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
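
Stated explicitly, the conflation defined above (the normalized product of the individual probability density functions p_1, ..., p_n) is:

```latex
Q(x) \;=\; \frac{\prod_{i=1}^{n} p_i(x)}{\displaystyle\int \prod_{i=1}^{n} p_i(y)\, dy}
```

Because each input density multiplies into the product, inputs with small variance dominate the shape of Q, which is how the model favors the distribution with minimum variation.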

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 67
326 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea

Authors: Kyomin Lee, Joohee Kim, Sangho Kang

Abstract:

The first commercial nuclear power reactor in South Korea, Kori Unit 1, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first nuclear power unit to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of steps: firstly, the plant inventory was investigated based on various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Secondly, the radiological conditions of systems, structures, and components (SSCs) were established to estimate the amount of radioactive waste by waste classification. Thirdly, the waste management strategies for Kori Unit 1, including waste packaging, were established. Fourthly, the selection of proper decontamination and dismantling (D&D) technologies was made considering various factors. Finally, the amount of decommissioning waste by classification for Kori 1 was estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results have shown that the expected amounts of decommissioning wastes were less than about 2% and 8% of the total wastes generated (i.e., the sum of clean wastes and radwastes) before and after waste processing, respectively, and it was found that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the results of the evaluation were compared with results from various decommissioning experience data and international/national decommissioning studies. The comparison results have shown that the radioactive waste amounts from Kori Unit 1 decommissioning were much less than those from the plants decommissioned in the U.S. and were comparable to those from the plants in Europe. This result comes from the difference in disposal costs and clearance criteria (i.e., free release levels) between the U.S. and non-U.S. countries. The preliminary evaluation performed using the methodology established in this study will be useful as important information in establishing the decommissioning plan for the decommissioning schedule and waste management strategy, including the transportation, packaging, handling, and disposal of radioactive wastes.

Keywords: characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste

Procedia PDF Downloads 188
325 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture

Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy

Abstract:

Background: Two new, simple, and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique, and second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, Coenzyme Q10 (Q), also known as ubidecarenone or ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, by applying the first derivative, both Q and E were alternately determined, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curve is linear over the concentration ranges of 10-60 and 5.6-70 μg/mL for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges between 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training sets were recorded in the spectral region of 230-300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in a defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet, with excellent recoveries. The ANN method was superior to the derivative technique, as the former determined both drugs under the non-linear experimental conditions. It also offers rapidity, high accuracy, and savings of effort and money; moreover, no specialist analyst is needed for its application. Although the ANN technique needs a large training set, it is the method of choice in the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
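
A minimal sketch of the zero-crossing first-derivative step: differentiate the mixture spectrum and read each analyte's D1 amplitude at the wavelength where the other analyte's derivative crosses zero (285 nm for Q, 235 nm for E, as reported); the spectra here are synthetic:

```python
import numpy as np

wavelengths = np.arange(220, 301)  # nm
# Synthetic mixture spectrum standing in for a measured Q + E absorbance scan
spec_mix = (np.exp(-((wavelengths - 275) / 20.0) ** 2)
            + 0.8 * np.exp(-((wavelengths - 240) / 15.0) ** 2))

d1 = np.gradient(spec_mix, wavelengths)  # first-derivative (D1) spectrum

# Read the D1 amplitudes at the reported zero-crossing wavelengths:
# Q is determined at 285 nm, E at 235 nm
amp_Q = d1[np.searchsorted(wavelengths, 285)]
amp_E = d1[np.searchsorted(wavelengths, 235)]
# Each amplitude is then mapped to concentration via its calibration curve
print(f"D1 amplitude for Q at 285 nm: {amp_Q:.4f}")
print(f"D1 amplitude for E at 235 nm: {amp_E:.4f}")
```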

Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network

Procedia PDF Downloads 423
324 Field Synergy Analysis of Combustion Characteristics in the Afterburner of Solid Oxide Fuel Cell System

Authors: Shing-Cheng Chang, Cheng-Hao Yang, Wen-Sheng Chang, Chih-Chia Lin, Chun-Han Li

Abstract:

The solid oxide fuel cell (SOFC) is a promising green technology which can achieve a high electrical efficiency. Due to the high operating temperature of the SOFC stack, the off-gases at high temperature from the anode and cathode outlets are introduced into an afterburner to convert the chemical energy into thermal energy by combustion. The heat is recovered to preheat the fresh air and fuel gases before they pass through the stack during SOFC power generation system operation. For an afterburner of an SOFC system, temperature control with good thermal uniformity is important. A burner with a well-designed geometry can usually achieve satisfactory performance. To design an afterburner for an SOFC system, computational fluid dynamics (CFD) simulation is suitable. In this paper, the hydrogen combustion characteristics in an afterburner with a simple geometry are studied using CFD. The burner is constructed as a cylindrical chamber with a fuel gas inlet, an air inlet, and an exhaust outlet. The flow field and temperature distributions inside the afterburner under different fuel and air flow rates are analyzed. To improve the temperature uniformity of the afterburner during SOFC system operation, the flow paths of the anode/cathode off-gases are varied by changing the positions of the fuel and air inlet channels, improving the heat and flow field synergy in the burner furnace. Because the air flow rate is much larger than that of the fuel gas, the flow structure and heat transfer in the afterburner are dominated by the air flow path. The present work studies the effects of fluid flow structures on the combustion characteristics of an SOFC afterburner using three simulation models with a cylindrical combustion chamber and a tapered outlet. All walls in the afterburner are assumed to be no-slip and adiabatic. In each case, two sets of parameters are simulated to study the transport phenomena of hydrogen combustion. The equivalence ratios are in the range of 0.08 to 0.1. Finally, the pattern factor for the simulation cases is calculated to investigate the effect of the gas inlet locations on the temperature uniformity of the SOFC afterburner. The results show that the temperature uniformity of the exhaust gas can be improved by simply adjusting the position of the gas inlet. The field synergy analysis indicates that the fluid flow paths should be designed in a way that contributes significantly to the heat transfer, i.e., the field synergy angle should be as small as possible. In the study cases, the averaged synergy angles of the burner are about 85°, 84°, and 81°, respectively.
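
For reference, the field synergy angle evaluated above is, at each point, the angle between the local velocity vector U and the temperature gradient ∇T (this is the standard definition from the field synergy principle, not a formula quoted from the paper):

```latex
\beta = \arccos \frac{\vec{U} \cdot \nabla T}{\lvert \vec{U} \rvert \, \lvert \nabla T \rvert}
```

The reported values of about 85°, 84°, and 81° are domain-averaged angles; the smaller the angle, the more effectively the flow path contributes to heat transfer.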

Keywords: afterburner, combustion, field synergy, solid oxide fuel cell

Procedia PDF Downloads 113
323 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
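
A minimal sketch of PLS as a regression tool on high-dimensional, correlated predictors (synthetic data standing in for NIR spectra or CNA matrices):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

# p >> n with strongly correlated columns, as in NIR spectra or CNA profiles
rng = np.random.default_rng(0)
n, p = 50, 500
latent = rng.standard_normal((n, 3))                   # 3 hidden factors
X = latent @ rng.standard_normal((3, p)) + 0.1 * rng.standard_normal((n, p))
y = latent @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

# PLS compresses X into a few latent components chosen to covary with y
pls = PLSRegression(n_components=3).fit(X, y)
print("R^2:", r2_score(y, pls.predict(X).ravel()))
```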

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data

Procedia PDF Downloads 15
322 Endoscopic Stenting of the Main Pancreatic Duct in Patients With Pancreatic Fluid Collections After Pancreas Transplantation

Authors: Y. Teterin, S. Suleymanova, I. Dmitriev, P. Yartcev

Abstract:

Introduction: One of the most common complications after pancreas transplantation is pancreatic fluid collections (PFCs), which are often complicated not only by infection and subsequent dysfunction of the pancreatoduodenal graft (PDG) but also by a rather high mortality rate among recipients. Drainage is not always effective and often requires repeated open surgical interventions, which worsens the outcome of the surgery. Percutaneous drainage of PFCs combined with endoscopic stenting of the main pancreatic duct of the pancreatoduodenal graft (MPDPDG) has shown high efficiency in the treatment of PFCs. Aims & Methods: From 01.01.2012 to 31.12.2021, 64 PDG transplantations were performed at the Sklifosovsky Research Institute for Emergency Medicine. In 11 cases (17.2%), the early postoperative period was complicated by the formation of PFCs. Of these, 7 patients underwent percutaneous drainage with high efficiency and did not require additional treatment. In the remaining 4 patients, drainage was ineffective, which was an indication for endoscopic stenting of the MPDPDG; these patients made up the study group. Among them were 3 men and 1 woman, with a mean age of 36.4 years. PFCs in these patients formed on days 1, 12, 18, and 47 after PDG transplantation. A gastroscope was used to stent the MPDPDG because of the anatomical location of the duodenoduodenal anastomosis after PDG transplantation. Selective catheterization of the MPDPDG was performed through the endoscope channel using a catheter and a guidewire, followed by opacification with a water-soluble contrast agent. The localization of the defect in the PDG duct system was determined from the extravasation of the contrast. After that, a plastic pancreatic stent 7 Fr in diameter and 7 cm in length was placed over the guidewire, positioned so that its proximal edge completely covered the defect zone while the distal end lay in the intestinal lumen. Results: In all patients, pancreatography of the PDG revealed extravasation of contrast in the area of the isthmus and body of the pancreas, which required stenting of the MPDPDG. In 1 (25%) case, the stent dislocated into the intestinal lumen (grade III according to Clavien-Dindo (2009)), and this patient underwent repeated endoscopic stenting of the MPDPDG. On average, the drainage tubes were removed 23 days after endoscopic stenting of the MPDPDG, and after approximately 40 days all patients were discharged in satisfactory condition with follow-up endocrinologist and surgeon consultations. Pancreatic stents were removed after 6 months ± 7 days. Conclusion: Endoscopic stenting of the main pancreatic duct of the donor pancreas is a highly effective and minimally invasive method for treating PFCs after transplantation of the pancreatoduodenal complex.

Keywords: pancreas transplantation, endoscopy surgery, diabetes, stenting, main pancreatic duct

Procedia PDF Downloads 63
321 Assessing the Blood-Brain Barrier (BBB) Permeability in PEA-15 Mutant Cat Brain Using the Magnetization Transfer (MT) Effect at 7T

Authors: Sultan Z. Mahmud, Emily C. Graff, Adil Bashir

Abstract:

Phosphoprotein enriched in astrocytes, 15 kDa (PEA-15) is a multifunctional adapter protein associated with the regulation of apoptotic cell death. It has recently been discovered that PEA-15 is crucial to normal neurodevelopment in domestic cats, a gyrencephalic animal model, although the exact function of PEA-15 in neurodevelopment is unknown. This study investigates how PEA-15 affects blood-brain barrier (BBB) permeability in the cat brain, which can cause abnormalities in tissue metabolite and energy supplies. Severe polymicrogyria and microcephaly have been observed in cats with a loss-of-function PEA-15 mutation, affecting their normal neurodevelopment. This suggests that the vital role of PEA-15 in neurodevelopment is associated with gyrification. Neurodevelopment is a highly energy-demanding process, and the mammalian brain depends on glucose as its main energy source. PEA-15 plays a very important role in glucose uptake and utilization by interacting with phospholipase D1 (PLD1). Mitochondria also play a critical role in bioenergetics and are essential to supply the energy needed for neurodevelopment. Cerebral blood flow regulates the metabolite supply, and recent findings have shown that blood plasma contains mitochondria as well; the BBB can therefore play a very important role in regulating the metabolite and energy supply of the brain. In this study, BBB permeability in the cat brain was measured using the MRI magnetization transfer (MT) effect on the perfusion signal. Perfusion is the tissue-mass-normalized supply of blood to the capillary bed, which also provides oxygen and other metabolites to the tissue. A fraction of the arterial blood water diffuses into the tissue, depending on the BBB permeability; this fraction is known as the water extraction fraction (EF). MT saturates the macromolecular pool, which affects blood water that has diffused into the tissue while having minimal effect on intravascular blood water that has not exchanged with the tissue. Measuring the perfusion signal with and without MT therefore enables estimation of the microvascular blood flow, EF, and permeability-surface-area product (PS) in the brain. All experiments were performed on a Siemens 7T Magnetom with a 32-channel head coil, using three control cats and three PEA-15 mutant cats. For the control cats, the average EF in white and gray matter was 0.9±0.1 and 0.86±0.15, perfusion was 85±15 and 97±20 mL/100g/min, and PS was 201±25 and 225±35 mL/100g/min, respectively. For the PEA-15 mutant cats, the average EF in white and gray matter was 0.81±0.15 and 0.77±0.2, perfusion was 140±25 and 165±18 mL/100g/min, and PS was 240±30 and 259±21 mL/100g/min, respectively. These results show that the BBB is compromised in the PEA-15 mutant cat brain: EF is decreased while perfusion and PS are increased in the mutant cats compared to the controls. These findings might further explain the function of PEA-15 in neurodevelopment.
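The three reported quantities (perfusion f, extraction fraction EF, and permeability-surface-area product PS) are commonly related through the Renkin-Crone capillary model, EF = 1 - exp(-PS/f). Whether the authors used exactly this form is an assumption, but their reported control white-matter values are roughly consistent with it, as the sketch below shows.

    import numpy as np

    def ps_from_extraction(f, EF):
        """Renkin-Crone model: EF = 1 - exp(-PS/f), so PS = -f * ln(1 - EF).

        f  : perfusion (mL/100g/min)
        EF : water extraction fraction (0-1)
        """
        return -f * np.log(1.0 - EF)

    # Control white-matter values from the abstract (f = 85, EF = 0.9)
    print(ps_from_extraction(85, 0.9))   # ~196 mL/100g/min, close to the reported 201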

Keywords: BBB, cat brain, magnetization transfer, PEA-15

Procedia PDF Downloads 108
320 Formulation and Evaluation of Curcumin-Zn (II) Microparticulate Drug Delivery System for Antimalarial Activity

Authors: M. R. Aher, R. B. Laware, G. S. Asane, B. S. Kuchekar

Abstract:

Objective: Studies have shown that a new combination therapy of artemisinin derivatives and curcumin is unique, with potential advantages over known ACTs. In the present study, an attempt was made to prepare a microparticulate drug delivery system of the curcumin-Zn complex and evaluate it in combination with artemether for antimalarial activity. Material and method: The curcumin-Zn complex was prepared and encapsulated using sodium alginate. The microparticles thus obtained were further coated with various enteric polymers at different coating thicknesses to control the release. The microparticles were evaluated for encapsulation efficiency, drug loading, and in vitro drug release. Roentgenographic studies were conducted in rabbits with a BaSO4-tagged formulation. The optimized formulation was screened for antimalarial activity using the P. berghei-infected mouse survival test and % parasitemia inhibition, alone (three oral doses of 5 mg/day) and in combination with artemether (i.p. 500, 1000, and 1500 µg). Curcumin-Zn(II) was estimated in serum after oral administration to rats by spectrofluorometry. Result: Microparticles coated with cellulose acetate phthalate showed the most satisfactory and controlled release, requiring 479 min for 60% drug release. X-ray images taken at different time intervals confirmed the retention of the formulation in the GI tract. Estimation of curcumin in serum by spectrofluorometry showed that the drug concentration is maintained in the blood for a longer time, with a tmax of 6 hours. The survival time (40 days post-treatment) of mice infected with P. berghei was compared after treatment with the Curcumin-Zn(II) microparticle-artemether combination, the curcumin-Zn complex, or artemether alone. Oral administration of Curcumin-Zn(II)-artemether prolonged the survival of P. berghei-infected mice. All mice treated with Curcumin-Zn(II) microparticles (5 mg/day) plus artemether (1000 µg) survived for more than 40 days and recovered with no detectable parasitemia. Administration of the Curcumin-Zn(II)-artemether combination reduced parasitemia in mice by more than 90% compared to control mice for the first 3 days after treatment. Conclusion: The antimalarial activity of the curcumin-Zn-artemether combination was more pronounced than that of monotherapy. A single dose of 1000 µg of artemether in the curcumin-Zn combination gave complete protection in P. berghei-infected mice, which may reduce the chances of drug resistance in malaria management.
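The abstract does not name a release model for the "479 min for 60% release" profile; one standard way to characterize such controlled-release data is a Korsmeyer-Peppas fit, sketched below with invented, purely illustrative time points (not the authors' measurements).

    import numpy as np
    from scipy.optimize import curve_fit

    def korsmeyer_peppas(t, k, n):
        # Fractional release M_t / M_inf = k * t^n (valid up to ~60% release)
        return k * np.power(t, n)

    # Hypothetical cumulative-release data (time in min, fraction released),
    # ending at the 60%-in-479-min point quoted in the abstract
    t = np.array([60.0, 120.0, 240.0, 360.0, 479.0])
    frac = np.array([0.12, 0.21, 0.37, 0.49, 0.60])

    (k, n), _ = curve_fit(korsmeyer_peppas, t, frac, p0=(0.01, 0.5))
    print(k, n)   # an exponent n near 0.5 would suggest Fickian diffusion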

Keywords: formulation, microparticulate drug delivery, antimalarial, pharmaceutics

Procedia PDF Downloads 372
319 Estimation of Biomedical Waste Generated in a Tertiary Care Hospital in New Delhi

Authors: Priyanka Sharma, Manoj Jais, Poonam Gupta, Suraiya K. Ansari, Ravinder Kaur

Abstract:

Introduction: As much as health care is necessary for the population, so is the management of the biomedical waste it produces. Biomedical waste is a broad term for the waste material produced during the diagnosis, treatment, or immunization of human beings and animals, or in research or the production or testing of biological products. Biomedical waste management is a chain of processes from the point of generation of biomedical waste to its final disposal in the correct and proper way assigned to that particular type of waste; any deviation from these processes leads to improper disposal, which is itself a major health hazard. Proper segregation is the key to biomedical waste management. Improper disposal of BMW can cause sharp injuries, which may lead to HIV, hepatitis B virus, and hepatitis C virus infections; proper disposal of BMW is therefore of utmost importance. Health care establishments segregate biomedical waste and dispose of it as per the Biomedical Waste Management Rules in India. Objectives: This study was done to observe the current trends of biomedical waste generated in a tertiary care hospital in Delhi. Methodology: Biomedical waste management rounds were conducted in the hospital wards. Relevant details were collected and analysed, and sites with maximum biomedical waste generation were identified. All data were cross-checked at the common collection site. Results: The total amount of waste generated in the hospital from January 2014 to December 2014 was 639,547 kg, of which 70.5% was general (non-hazardous) waste and the remaining 29.5% was BMW, consisting of highly infectious waste (12.2%), disposable plastic waste (16.3%), and sharps (1%). The sites producing the maximum quantity of biomedical waste were the Obstetrics and Gynaecology wards, with 45.8% of total biomedical waste production, followed by the Paediatrics, Surgery, and Medicine wards with 21.2%, 4.6%, and 4.3%, respectively. The maximum average biomedical waste generation was in the Obstetrics and Gynaecology ward at 0.7 kg/bed/day, followed by the Paediatrics, Surgery, and Medicine wards with 0.29, 0.28, and 0.18 kg/bed/day, respectively. Conclusions: Hospitals should pay attention to the sites that produce a large amount of BMW to avoid improper segregation of biomedical waste. Induction and refresher training programmes on biomedical waste management should also be conducted to avoid improper management, and healthcare workers should be made aware of the risks of poor biomedical waste management.
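The kg/bed/day figures above follow from a simple normalization of annual ward waste by bed capacity. A worked sketch is given below; the bed count and annual tonnage are hypothetical, chosen only to reproduce the 0.7 kg/bed/day figure.

    # Average biomedical waste generation per bed per day:
    # annual ward waste (kg) / (number of beds * days in the year)
    def waste_per_bed_day(annual_kg, beds, days=365):
        return annual_kg / (beds * days)

    # e.g. a hypothetical Obstetrics ward producing 10,220 kg/year across 40 beds
    print(round(waste_per_bed_day(10220, 40), 2))   # 0.7 kg/bed/day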

Keywords: biomedical waste, biomedical waste management, hospital-tertiary care, New Delhi

Procedia PDF Downloads 221
318 Extreme Heat and Workforce Health in Southern Nevada

Authors: Erick R. Bandala, Kebret Kebede, Nicole Johnson, Rebecca Murray, Destiny Green, John Mejia, Polioptro Martinez-Austria

Abstract:

Summer temperature data from Clark County were collected and used to estimate two different heat-related indexes: the heat index (HI) and the excess heat factor (EHF). These two indexes were used jointly with data on heat-related deaths in Clark County to assess the effect of extreme heat on the exposed population. The trends of the heat indexes were then analyzed for the 2007-2016 decade, and the correlation between heat wave episodes and the number of heat-related deaths in the area was estimated. The HI increased significantly in June, July, and August over the last ten years. The same trend was found for the EHF, which showed a clear increase in the severity and number of these events per year. The number of heat wave episodes increased from 1.4 per year during the 1980-2016 period to 1.66 per year during the 2007-2016 period. However, a different trend was found for heat-wave-event duration, which decreased from an average of 20.4 days during the trans-decadal period (1980-2016) to 18.1 days during the most recent decade (2007-2016). The number of heat-related deaths also increased from 2007 to 2016, with 2016 having the highest number of heat-related deaths. Both the HI and the number of deaths showed a normal-like distribution for June, July, and August, with peak values reached in late July and early August. The average maximum HI values correlated better with the number of deaths registered in Clark County than the EHF did, probably because HI uses the maximum temperature and humidity in its estimation, whereas EHF uses the average medium temperature. However, it is worth testing the EHF for the study zone because it has been reported to fit well in the case of heat-related morbidity. For the overall period, 437 heat-related deaths were registered in Clark County, with 20% of the deaths occurring in June, 52% in July, 18% in August, and the remaining 10% in the other months of the year. The most vulnerable subpopulation was people over 50 years old, who accounted for 76% of the registered heat-related deaths; most of these cases were associated with pre-existing heart disease. The second most vulnerable subpopulation was young adults (20-50), who accounted for 23% of the heat-related deaths; these deaths were associated with alcohol or illegal drug intoxication.
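For readers unfamiliar with the EHF, the widely used Nairn-Fawcett formulation compares a 3-day mean temperature against both a climatological 95th percentile (significance index) and the preceding 30-day mean (acclimatisation index). A minimal sketch follows, assuming daily mean temperatures as input; the abstract does not state which EHF variant the authors used, and clipping the significance term at zero is one common convention.

    import numpy as np

    def excess_heat_factor(tmean, t95):
        """EHF (Nairn-Fawcett style) from daily mean temperatures.

        tmean : sequence of daily mean temperatures (deg C)
        t95   : climatological 95th percentile of daily mean temperature
        """
        tmean = np.asarray(tmean, dtype=float)
        ehf = np.full(tmean.shape, np.nan)
        for i in range(32, len(tmean)):
            t3 = tmean[i-2:i+1].mean()       # current 3-day mean
            t30 = tmean[i-32:i-2].mean()     # previous 30-day mean
            ehi_sig = t3 - t95               # significance index
            ehi_accl = t3 - t30              # acclimatisation index
            ehf[i] = max(0.0, ehi_sig) * max(1.0, ehi_accl)
        return ehf

    # Example: a 40-day synthetic series with a hot spell at the end
    t = np.concatenate([np.full(35, 30.0), np.full(5, 42.0)])
    print(np.nanmax(excess_heat_factor(t, t95=38.0)))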

Keywords: heat, health, hazards, workforce

Procedia PDF Downloads 79
317 Optimal Tetra-Allele Cross Designs Including Specific Combining Ability Effects

Authors: Mohd Harun, Cini Varghese, Eldho Varghese, Seema Jaggi

Abstract:

Hybridization crosses play a vital role in breeding experiments to evaluate the combining abilities of individual parental lines or crosses for the creation of lines with desirable qualities. There are various ways of obtaining progenies and studying the combining ability effects of the lines taken in a breeding programme; some of the most common methods are the diallel or two-way cross, the triallel or three-way cross, and the tetra-allele or four-way cross. These techniques help breeders improve quantitative traits of economic as well as nutritional importance in crops and animals. Among these methods, the tetra-allele cross provides extra information in terms of higher specific combining ability (sca) effects, and the hybrids thus produced exhibit individual as well as population buffering mechanisms because of their broad genetic base. Most common commercial corn hybrids are either three-way or four-way cross hybrids. The tetra-allele cross has emerged as the most practical and acceptable scheme for the production of slaughter pigs with a fast growth rate, good feed efficiency, and good carcass quality, and tetra-allele crosses are widely used for the exploitation of heterosis in commercial silkworm production. Experimental designs involving tetra-allele crosses have been studied extensively in the literature, and the optimality of such designs has also been considered as a researchable issue. In practical situations, it is advisable to include sca effects in the model, as this information is needed by the breeder to improve economically and nutritionally important quantitative traits. Thus, a model that provides information on specific traits by utilizing sca effects along with general combining ability (gca) effects may help breeders deal with various stresses. In this paper, a model for experimental designs involving tetra-allele crosses that incorporates both gca and sca effects is defined, and optimality aspects of such designs are discussed with sca effects included in the model. Orthogonality conditions are derived for block designs, ensuring that contrasts among the gca effects can be estimated independently of the sca effects after eliminating nuisance factors. A user-friendly SAS macro and a web solution (webPTC) have been developed for the generation and analysis of such designs.
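The abstract does not give the model explicitly; a plausible schematic form for the mean response of the four-way cross (i × j) × (k × l) observed in block h, with the gca and sca effects named above, is the following (the paper's exact parameterization may include additional sca terms):

\[
y_{h(ij)(kl)} = \mu + \beta_h + g_i + g_j + g_k + g_l + s_{ij} + s_{kl} + \varepsilon_{h(ij)(kl)},
\]

where \( \mu \) is the general mean, \( \beta_h \) the block effect, \( g \) the gca effects of the four parental lines, \( s \) the sca effects of the constituent single crosses, and \( \varepsilon \) the random error. The orthogonality conditions mentioned in the abstract would then guarantee that contrasts among the \( g \) terms are estimable free of the \( s \) terms.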

Keywords: general combining ability, optimality, specific combining ability, tetra-allele cross, webPTC

Procedia PDF Downloads 109
316 Ultra-Sensitive Point-Of-Care Detection of PSA Using an Enzyme- and Equipment-Free Microfluidic Platform

Authors: Ying Li, Rui Hu, Shizhen Chen, Xin Zhou, Yunhuang Yang

Abstract:

Prostate cancer is one of the leading causes of cancer-related death among men. Prostate-specific antigen (PSA), a specific product of prostatic epithelial cells, is an important indicator of prostate cancer. Though PSA is not a specific serum biomarker for the screening of prostate cancer, it is recognized as an indicator of prostate cancer recurrence and response to therapy post-prostatectomy. Since radical prostatectomy eliminates the source of PSA production, serum PSA levels fall below 50 pg/mL and may be below the detection limit of clinical immunoassays (the lower limit of detection of current clinical immunoassays is around 10 pg/mL). Many clinical studies have shown that intervention at low PSA levels can improve patient outcomes significantly. Therefore, ultra-sensitive and precise assays that can accurately quantify extremely low levels of PSA (below 1-10 pg/mL) will facilitate the assessment of patients for the possibility of early adjuvant or salvage treatment. Currently, commercially available ultra-sensitive ELISA kits (not used clinically) can only reach a detection limit of 3-10 pg/mL. Other platforms developed by different research groups achieve detection limits as low as 0.33 pg/mL, but they rely on sophisticated instruments for the final readout. Herein, we report a microfluidic platform for point-of-care (POC) detection of PSA with a detection limit of 0.5 pg/mL, without the assistance of any equipment. This platform is based on a previously reported volumetric-bar-chart chip (V-Chip), which applies platinum nanoparticles (PtNPs) as the ELISA probe to convert the biomarker concentration into a volume of oxygen gas that pushes red ink along a channel to form a visualized bar chart; the length of each bar quantifies the biomarker concentration of the corresponding sample. In this work, we devised a long-reading-channel V-Chip (LV-Chip) to achieve a wide detection window. In addition, the LV-Chip employs a unique enzyme-free ELISA probe that significantly enriches PtNPs and has a 500-fold enhanced catalytic ability over that of the previous V-Chip, resulting in a significantly improved detection limit. The LV-Chip can complete a PSA assay for five samples in 20 min. The device was applied to detect PSA in 50 patient serum samples, and the on-chip results correlated well with a conventional immunoassay. In addition, PSA levels in finger-prick whole blood samples from healthy volunteers were successfully measured on the device. This completely stand-alone LV-Chip platform enables convenient POC testing for patient follow-up in the physician's office and is also useful in resource-constrained settings.
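Quantification on such a bar-chart chip reduces to inverting a calibration of ink-bar displacement against PSA standards. The sketch below assumes a linear response; all numbers are invented for illustration and are not the authors' calibration data.

    import numpy as np

    # Hypothetical calibration: ink-bar displacement (mm) read from the chip
    # for PSA standards (pg/mL)
    conc = np.array([0.5, 1.0, 5.0, 10.0, 50.0])
    bar_mm = np.array([1.2, 1.9, 6.8, 12.5, 58.0])

    slope, intercept = np.polyfit(conc, bar_mm, 1)

    def psa_from_bar(mm):
        # Invert the linear calibration to quantify an unknown sample
        return (mm - intercept) / slope

    print(round(psa_from_bar(25.0), 2))   # estimated PSA in pg/mL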

Keywords: point-of-care detection, microfluidics, PSA, ultra-sensitive

Procedia PDF Downloads 90
315 Antioxidant Status in Synovial Fluid from Osteoarthritis Patients: A Pilot Study in Indian Demography

Authors: S. Koppikar, P. Kulkarni, D. Ingale, N. Wagh, S. Deshpande, A. Mahajan, A. Harsulkar

Abstract:

The crucial role of reactive oxygen species (ROS) in the progression of osteoarthritis (OA) pathogenesis has been endorsed several times, though its exact mechanism remains unclear. Oxidative stress is known to instigate classical stress factors such as cytokines, chemokines, and ROS, which hamper the cartilage remodelling process and ultimately worsen the disease. Synovial fluid (SF) is a biological communicator between cartilage and synovium that accumulates redox and biochemical signalling mediators. The present work attempts to measure several oxidative stress markers in synovial fluid obtained from knee OA patients with varying degrees of disease severity. Thirty OA and five meniscal-tear (MT) patients were graded using the Kellgren-Lawrence scale and assessed for nitric oxide (NO), nitrate-nitrite (NN), 2,2-diphenyl-1-picrylhydrazyl (DPPH) scavenging, ferric reducing antioxidant potential (FRAP), catalase (CAT), superoxide dismutase (SOD), and malondialdehyde (MDA) levels for comparison. Of the various oxidative markers studied, NO and SOD showed a significant difference between moderate and severe OA (p = 0.007 and p = 0.08, respectively), whereas CAT showed a significant difference between the MT and mild groups (p = 0.07). Interestingly, NN showed a statistically significant positive correlation with OA severity (p = 0.001 and p = 0.003). MDA, a lipid peroxidation by-product, was highest in early OA when compared to MT (p = 0.06). However, FRAP did not show any correlation with OA severity or the MT control. NO is an essential bio-regulatory molecule involved in several physiological processes and inflammatory conditions; however, owing to its short half-life, exact estimation of NO is difficult, and its measurable stable products are therefore considered important biomarkers of oxidative damage. The levels of NO and nitrate-nitrite in the SF of patients with OA indicated their involvement in disease progression. When the SF groups were compared, a significant correlation among the moderate, mild, and MT groups was established. To summarize, the present data illustrate higher levels of NO, SOD, CAT, DPPH and MDA in early OA in comparison with MT as a control group. NN emerged as a prognostic biomarker in knee OA patients, which may serve as a future target in OA treatment.
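Several of the assays above reduce to simple absorbance ratios; for example, DPPH radical-scavenging activity is conventionally computed as shown below (the formula is the standard one, but the absorbance readings at 517 nm are invented for illustration).

    # Standard DPPH radical-scavenging calculation from absorbance at 517 nm
    def dpph_inhibition(a_control, a_sample):
        return (a_control - a_sample) / a_control * 100

    print(round(dpph_inhibition(0.85, 0.42), 1))   # 50.6 % inhibition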

Keywords: antioxidant, knee osteoarthritis, oxidative stress, synovial fluid

Procedia PDF Downloads 451
314 A Qualitative Assessment of the Internal Communication of the College of Communication: Basis for a Strategic Communication Plan

Authors: Edna T. Bernabe, Joshua Bilolo, Sheila Mae Artillero, Catlicia Joy Caseda, Liezel Once, Donne Ynah Grace Quirante

Abstract:

Internal communication is significant for an organization to function to its full extent, and a strategic communication plan builds an organization's structure and makes it more systematic. Information is a vital part of communication inside the organization, as it shapes every possible outcome, be it positive or negative. It is, therefore, imperative to assess the communication structure of a particular organization to secure a better and more harmonious communication environment. Thus, this research was intended to identify the internal communication channels used in the Polytechnic University of the Philippines-College of Communication (PUP-COC) as an organization; to identify the flow of information, specifically in downward, upward, and horizontal communication; to assess the accuracy, consistency, and timeliness of its internal communication channels; and to come up with a proposed strategic communication plan for information dissemination to improve the existing communication flow in the college. The researchers formulated a framework from the Input-Throughput-Output-Feedback-Goal elements of General System Theory and gathered data to assess the PUP-COC's internal communication. The communication model links the objectives of the study to knowledge of the internal organization of the college. A qualitative approach, with the case study as the tradition of inquiry, was used to gain a deeper understanding of internal organizational communication in PUP-COC, with interviews as the primary method. This was supported by quantitative data gathered through a survey of the college's students. The researchers interviewed 17 participants: the college dean, the 4 chairpersons of the college departments, 11 faculty members and staff, and the acting Student Council president. An interview guide and a standardized questionnaire were formulated as instruments to generate the data. After a thorough analysis, it was found that two-way communication flow exists in PUP-COC. The type of communication channel used by internal stakeholders varies according to whom a particular person is communicating with, and members of the PUP-COC community use different types of communication channels depending on the flow of communication involved. The most common types of internal communication are letters and memoranda for downward communication, while letters, text messages, and interpersonal communication are often used in upward communication; various forms of social media are used in horizontal communication. Accuracy, consistency, and timeliness play a significant role in information dissemination within the college. However, some problems were also found in the communication system, the most common being the delay in the dissemination of memoranda and letters and the uneven distribution of information and instructions to faculty, staff, and students. This led the researchers to formulate a strategic communication plan which proposes strategies to solve the communication problems experienced by the internal stakeholders.

Keywords: communication plan, downward communication, internal communication, upward communication

Procedia PDF Downloads 483