Search results for: edge betweenness centrality index
813 Role of Intralesional Tranexamic Acid in Comparison of Oral Tranexamic Acid in the Treatment of Melasma
Authors: Lubna Khondker
Abstract:
Background: Melasma is a common pigmentary dermatosis, manifested by hyperpigmented macules or patches on the face, commonly occurring in females due to an acquired disorder in the melanogenesis process. Although several treatments are currently used, it remains a great challenge due to its recurrence and refractory nature. It was recently reported that tranexamic acid (TA, a plasmin inhibitor) is an effective treatment for melasma. Objective: This study aims to compare the efficacy and side effects of intralesional injection of tranexamic acid with oral tranexamic acid in the treatment of melasma. Methods: A clinical trial was conducted in the Department of Dermatology and Venereology, Bangabandhu Sheikh Mujib Medical University, over a period of 4 years. A total of 100 patients with melasma who did not respond to topical therapy were included in the study as group A and group B. After informed consent was obtained, group A patients were administered intralesional injections (10 mg/ml) of tranexamic acid (TA) weekly for 6 weeks, and group B patients were treated with oral tranexamic acid 250 mg 12-hourly for 12 weeks. The severity and extent of pigmentation were assessed by the modified melasma area severity index (MASI). The response to treatment was assessed by MASI at 4 weeks, 8 weeks, and 12 weeks after stopping treatment. Results: The study showed that the MASI scores at baseline, 4 weeks, 8 weeks, and 12 weeks were 18.23±1.22, 6.14±3.26, 3.21±2.14 and 2.11±2.01 respectively in group A, and 17.87±1.12, 11.21±6.25, 6.57±4.26 and 6.41±4.17 respectively in group B. The mean MASI was significantly reduced in group A compared to group B at the 4th, 8th, and 12th weeks. The present study showed that among group A patients, 56% rated excellent (>75% reduction) in outcome, 32% good (50-75% reduction), 8% moderate (25-50% reduction) and only 4% were unsatisfactory (<25% reduction); among group B patients, 14% rated excellent in outcome, 28% good, 36% moderate and 22% were unsatisfactory.
Overall improvement in our study was 96% in group A and 78% in group B. Side effects were negligible, and all the patients tolerated the treatment well. Conclusion: Based on our results, intralesional tranexamic acid (10 mg/ml) is more effective and safer than oral tranexamic acid in the treatment of melasma.
Keywords: intralesional tranexamic acid, melasma, oral tranexamic acid, MASI score
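The outcome grading reported in the abstract above (excellent >75% reduction, good 50-75%, moderate 25-50%, unsatisfactory <25%) is simple arithmetic on MASI scores. A minimal sketch follows; the handling of band edges is an assumption, since the abstract does not specify which grade a boundary value receives:

```python
def masi_reduction(baseline, followup):
    """Percentage reduction in MASI score relative to baseline."""
    return (baseline - followup) / baseline * 100

def grade_outcome(reduction_pct):
    """Grade a MASI reduction using the bands reported in the abstract.

    Boundary values are assigned to the higher grade (an assumption).
    """
    if reduction_pct > 75:
        return "excellent"
    elif reduction_pct >= 50:
        return "good"
    elif reduction_pct >= 25:
        return "moderate"
    return "unsatisfactory"

# Group means from the abstract: baseline vs. 12 weeks
group_a = masi_reduction(18.23, 2.11)   # intralesional TA
group_b = masi_reduction(17.87, 6.41)   # oral TA
```

Applied to the reported group means, the intralesional arm lands in the "excellent" band and the oral arm in the "good" band, consistent with the abstract's conclusion.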
Procedia PDF Downloads 618
812 Determining the Extent and Direction of Relief Transformations Caused by Ski Run Construction Using LIDAR Data
Authors: Joanna Fidelus-Orzechowska, Dominika Wronska-Walach, Jaroslaw Cebulski
Abstract:
Mountain areas are very often exposed to numerous transformations connected with the development of tourist infrastructure. In the mountain areas of Poland, ski tourism is very popular, so agricultural areas are often transformed into tourist areas. The construction of new ski runs can change the direction and rate of slope development. The main aim of this research was to determine geomorphological and hydrological changes within slopes caused by ski run construction. The study was conducted in the Remiaszów catchment in the Inner Polish Carpathians (southern Poland). The mean elevation of the catchment is 859 m a.s.l. and the maximum is 946 m a.s.l. The surface area of the catchment is 1.16 km2, of which 16.8% is occupied by the two studied ski runs. The studied ski runs were constructed in 2014 and 2015. In order to determine the relief transformations connected with new ski run construction, high resolution LIDAR data were analyzed. The general relief changes in the studied catchment were determined on the basis of ALS (Airborne Laser Scanning) data obtained before (2013) and after (2016) ski run construction. Based on the two sets of ALS data, a digital elevation model of difference (DoD) was created, which made it possible to determine the quantitative relief changes in the entire studied catchment. Additionally, cross and longitudinal profiles were calculated within slopes where new ski runs were built. Detailed data on relief changes within selected test surfaces were obtained from TLS (Terrestrial Laser Scanning). Hydrological changes within the analyzed catchment were determined based on the convergence and divergence index. The study shows that the construction of the new ski runs caused significant geomorphological and hydrological changes in the entire studied catchment. However, the most important changes were identified within the ski slopes. After the construction of the ski runs, the surface of the entire catchment was lowered by about 0.02 m on average.
Hydrological changes in the studied catchment mainly led to the interruption of surface runoff pathways and changes in runoff direction and geometry.
Keywords: hydrological changes, mountain areas, relief transformations, ski run construction
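The DoD step described in the abstract above, differencing two co-registered ALS elevation models to quantify surface change, can be sketched as follows; the grid values are illustrative toy numbers, not the study's data:

```python
import numpy as np

def dem_of_difference(dem_before, dem_after):
    """Per-cell elevation change between two co-registered DEMs.

    Positive values indicate deposition/raising, negative values lowering.
    """
    return np.asarray(dem_after, dtype=float) - np.asarray(dem_before, dtype=float)

# Toy 2x2 grids standing in for the 2013 (before) and 2016 (after) ALS surfaces
before = np.array([[860.00, 859.50],
                   [858.20, 857.90]])
after = np.array([[859.98, 859.50],
                  [858.20, 857.84]])

dod = dem_of_difference(before, after)
mean_change = dod.mean()  # a negative mean indicates net surface lowering
```

On these toy grids the mean change is -0.02 m, the same order as the catchment-wide lowering reported in the abstract.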
811 The Association of Slope Failure and Lineament Density along the Ranau-Tambunan Road, Sabah, Malaysia
Authors: Norbert Simon, Rodeano Roslee, Abdul Ghani Rafek, Goh Thian Lai, Azimah Hussein, Lee Khai Ern
Abstract:
The 54 km stretch of the Ranau-Tambunan (RTM) road in Sabah is subjected to slope failures almost every year. This study focuses on identifying sections of the road that are susceptible to failure based on temporal landslide density and lineament density analyses. In addition to the analyses, the rock slopes in several sections of the road were assessed using the geological strength index (GSI) technique. The analysis involved 148 landslides recorded in 1978, 1994, 2009 and 2011. The landslides were digitized as points, and the point density was calculated for every 1 km2 of the road. The lineaments of the area were interpreted from the Landsat 7 15 m panchromatic band. The lineament density was later calculated for every 1 km2 of the area using the same technique as the slope failure density calculation. The landslide and lineament densities were classified into three classes that indicate the level of susceptibility (low, moderate, high). Subsequently, the two density maps were overlaid to produce the final susceptibility map. The combination of the high susceptibility classes from both maps signifies a high potential for slope failure in those locations in the future. The final susceptibility map indicates that there are 22 sections of the road that are highly susceptible. Seven rock slopes were assessed along the RTM road using the GSI technique. It was found from the assessment that rock slopes along this road are highly fractured and weathered and can be classified into fair to poor categories. The poor condition of the rock slopes can be attributed to the high lineament density present in the study area. Six of the rock slopes are located in the high susceptibility zones.
A detailed investigation of the 22 highly susceptible sections of the RTM road should be conducted, given their higher susceptibility to failure, in order to prevent untoward incidents to road users in the future.
Keywords: GSI, landslide, landslide density, landslide susceptibility, lineament density
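The overlay logic in the abstract above, classing each 1 km2 cell by landslide and lineament density and flagging cells where both classes are high, can be sketched as follows; the class thresholds and densities are hypothetical, since the abstract does not publish its class boundaries:

```python
import numpy as np

def density_class(density, low_thr, high_thr):
    """Classify a density map into 1 = low, 2 = moderate, 3 = high."""
    d = np.asarray(density, dtype=float)
    return np.where(d >= high_thr, 3, np.where(d >= low_thr, 2, 1))

def high_susceptibility(landslide_density, lineament_density,
                        ls_thr=(2, 5), ln_thr=(1.0, 2.5)):
    """Cells where both density maps fall in their 'high' class.

    Threshold pairs (low, high) are illustrative assumptions.
    """
    ls = density_class(landslide_density, *ls_thr)
    ln = density_class(lineament_density, *ln_thr)
    return (ls == 3) & (ln == 3)

# Three toy road cells: densities per km2
flags = high_susceptibility([6, 3, 1], [3.0, 3.0, 0.5])
```

Only the first toy cell is high in both maps, so only it is flagged, mirroring how the 22 highly susceptible road sections were isolated.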
810 Development of a Model for Predicting Radiological Risks in Interventional Cardiology
Authors: Stefaan Carpentier, Aya Al Masri, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: During an 'Interventional Radiology (IR)' procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, predicting the patient's peak skin dose before the intervention is important for improving the post-operative care given to the patient. The objective of this study is to estimate, before the intervention, the patient dose for 'Chronic Total Occlusion (CTO)' procedures by selecting relevant clinical indicators. Materials and methods: 103 procedures were performed in the 'Interventional Cardiology (IC)' department using a Siemens Artis Zee image intensifier that provides the air kerma of each IC exam. Peak Skin Dose (PSD) was measured for each procedure using radiochromic films. Patient parameters such as sex, age, weight, and height were recorded. The complexity index J-CTO score, specific to each intervention, was determined by the cardiologist. A correlation method applied to these indicators made it possible to specify their influence on the dose. A predictive model of the dose was created using multiple linear regression. Results: Of the 103 patients involved in the study, 5 were excluded for clinical reasons and 2 for placement of radiochromic films outside the exposure field. 96 2D dose maps were finally used. The influencing factors with the highest correlation with the PSD are the patient's diameter and the J-CTO score. The predictive model is based on these parameters. The comparison between estimated and measured skin doses shows an average difference of 0.85 ± 0.55 Gy for doses of less than 6 Gy. The mean difference between air kerma and PSD is 1.66 ± 1.16 Gy. Conclusion: Using our developed method, a first estimate of the dose to the patient's skin is available before the start of the procedure, which helps the cardiologist in carrying out the intervention.
This estimation is more accurate than that provided by the air kerma.
Keywords: chronic total occlusion procedures, clinical experimentation, interventional radiology, patient's peak skin dose
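The dose model in the abstract above is a multiple linear regression of PSD on patient diameter and J-CTO score. A minimal sketch follows; the coefficients and synthetic records are hypothetical, since the fitted values are not given in the abstract:

```python
import numpy as np

def fit_psd_model(diameter_cm, jcto, psd_gy):
    """Ordinary least-squares fit of PSD = b0 + b1*diameter + b2*J-CTO."""
    X = np.column_stack([np.ones(len(psd_gy)), diameter_cm, jcto])
    coef, *_ = np.linalg.lstsq(X, np.asarray(psd_gy, dtype=float), rcond=None)
    return coef

def predict_psd(coef, diameter_cm, jcto):
    """Pre-procedure PSD estimate from the fitted coefficients."""
    return coef[0] + coef[1] * diameter_cm + coef[2] * jcto

# Synthetic records generated exactly from PSD = 0.5 + 0.05*diameter + 0.4*J-CTO
diam = [20.0, 25.0, 30.0, 35.0]
jcto = [0, 1, 2, 1]
psd = [1.5, 2.15, 2.8, 2.65]
coef = fit_psd_model(diam, jcto, psd)
```

Because the synthetic data lie exactly on the assumed plane, the fit recovers the generating coefficients, which illustrates the mechanics without claiming the study's actual model.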
809 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection
Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten
Abstract:
Purpose: Applying solder paste to printed circuit boards (PCB) with stencils has been the method of choice over the past years. A new method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge. This can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in, real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process by immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips were employed to capture images of the printed circuit board under four different illuminations (white, red, green and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtracting it from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds that divide the main components are obtained from the multimodal histogram using three probability density functions; determining their intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas, which are removed by another morphological opening.
For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using the different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and the green color channel. Estimating two thresholds from the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. The comparison of the results to manually segmented images yields high sensitivity and specificity values. Analyzing the overall result delivers a Dice coefficient of 0.89, which varies for single-object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system which allows for more precise segmentation results using structure analysis.
Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection
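The quantitative check described in the abstract above relies on the Dice coefficient between the algorithm's segmentation and a manual reference. A minimal sketch with toy binary masks:

```python
import numpy as np

def dice_coefficient(segmentation, ground_truth):
    """Dice = 2|A intersect B| / (|A| + |B|) for two binary masks."""
    a = np.asarray(segmentation, dtype=bool)
    b = np.asarray(ground_truth, dtype=bool)
    denom = a.sum() + b.sum()
    # Two empty masks agree perfectly by convention
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A perfect match scores 1.0; an over-segmentation with one spurious pixel out of two scores 2/3, which is how per-object scores like the reported 0.96 and 0.25 arise.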
808 Sea Surface Temperature and Climatic Variables as Drivers of North Pacific Albacore Tuna Thunnus Alalunga Time Series
Authors: Ashneel Ajay Singh, Naoki Suzuki, Kazumi Sakuramoto, Swastika Roshni, Paras Nath, Alok Kalla
Abstract:
Albacore tuna (Thunnus alalunga) is one of the commercially important species of tuna in the North Pacific region. Despite the long history of albacore fisheries in the Pacific, its ecological characteristics are not sufficiently understood. The effects of changing climate on numerous commercially and ecologically important fish species, including albacore tuna, have been documented over the past decades. The objective of this study was to explore and elucidate the relationship of environmental variables with the stock parameters of albacore tuna. The relationship of North Pacific albacore tuna recruitment (R), spawning stock biomass (SSB) and recruits per spawning biomass (RPS) from 1970 to 2012 with the environmental factors of sea surface temperature (SST), Pacific decadal oscillation (PDO), El Niño southern oscillation (ENSO) and Pacific warm pool index (PWI) was examined. SST and PDO were used as independent variables together with SSB to construct stock reproduction models for R and RPS, as they showed the most significant relationship with the dependent variables. ENSO and PWI were excluded due to collinearity with SST and PDO. Model selection was based on R2 values, the Akaike Information Criterion (AIC) and significant parameter estimates at p<0.05. Models with single independent variables of SST, PDO, ENSO and PWI were also constructed to illuminate their individual effects on albacore R and RPS. From the results, SST and PDO yielded the most significant models for reproducing the North Pacific albacore tuna R and RPS time series. SST has the highest impact on albacore R and RPS when comparing models with single environmental variables. It is important for fishery managers and decision makers to incorporate these findings into their albacore tuna management plans for the North Pacific Oceanic region.
Keywords: Albacore tuna, El Niño southern oscillation, Pacific decadal oscillation, sea surface temperature
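Model selection in the abstract above ranks candidate regressions by R2, AIC and parameter significance. The AIC comparison step can be sketched as follows, using the common Gaussian form AIC = n*ln(RSS/n) + 2k; the candidate names and fit statistics are illustrative, not the study's values:

```python
import math

def aic(rss, n, k):
    """Akaike Information Criterion for a Gaussian regression.

    rss: residual sum of squares, n: observations, k: fitted parameters.
    """
    return n * math.log(rss / n) + 2 * k

def best_model(candidates):
    """candidates: {name: (rss, n, k)}; the lowest-AIC model wins."""
    return min(candidates, key=lambda name: aic(*candidates[name]))

# Hypothetical fits of recruitment on SSB plus one environmental covariate each
candidates = {"SSB+SST": (10.0, 43, 3), "SSB+PDO": (12.0, 43, 3)}
```

With equal sample size and parameter count, the smaller residual sum of squares wins, which is the sense in which the SST model came out on top.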
807 Gross Morphological Study on Heart of Yellow Bellied Sea Snake
Authors: Jonnalagadda Naveen, M. P. S. Tomar, Putluru Satish, Palanisamy Dharani
Abstract:
The present investigation was carried out on a single specimen of the heart of a yellow-bellied sea snake, which accidentally came to the seashore in a fisherman's net. After death, the specimen was preserved in 10% neutral buffered formalin and observed for its morphology. The literature revealed that only meager information was available on the anatomy of the heart of this species of snake; thus, the present study was planned on the gross anatomy of the heart of the yellow-bellied sea snake. The heart of the yellow-bellied sea snake was located between the 28th and 35th ribs in an oblique direction in the pericardial sac. It was three-chambered, with complete division of the atria, but the ventricular cavity was incompletely divided. The apex did not show any gubernaculum cordis. The sinus venosus was the common cavity for the confluence of the anterior and posterior venae cavae, and the jugular vein opened with the anterior vena cava. The opening of the posterior vena cava was slit-like and guarded by membranous valves, whereas no valve could be observed at the openings of the anterior vena cava and the jugular vein. Both caval veins ran along the right border of the heart. The pulmonary vein was single and later divided into two branches. The length-width index was 1.33 for the atria and 1.67 for the ventricle. The atrioventricular canal was situated slightly to the left of the midline of the heart and was divided into a right cavum pulmonale and a left cavum arteriosum, of which the right was slightly larger and longer than the left. The cavum venosum was present between the cavum pulmonale and the cavum arteriosum. The ventricle was an elongated, triangular, muscular compartment with a ventrally located apex. Internally, the cavity of the ventricle was divided into two partial chambers, dorsally by a muscular ridge and ventrally by an incomplete interventricular septum.
Keywords: aorta, atrium, heart, sea snake, sinus venosus, ventricle
806 Monitoring Deforestation Using Remote Sensing and GIS
Authors: Tejaswi Agarwal, Amritansh Agarwal
Abstract:
Forest ecosystems play a very important role in the global carbon cycle, storing about 80% of all above-ground and 40% of all below-ground terrestrial organic carbon. There is much interest in the extent of tropical forests and their rates of deforestation for two reasons: greenhouse gas contributions and the profoundly negative impact on biodiversity. Deforestation has many ecological, social and economic consequences, one of which is the loss of biological diversity. The rapid deployment of remote sensing (RS) satellites and the development of RS analysis techniques in the past three decades have provided a reliable, effective, and practical way to characterize terrestrial ecosystem properties. Global estimates of tropical deforestation vary widely, ranging from 50,000 to 170,000 km2/yr; recent FAO tropical deforestation estimates for 1990–1995 cite 116,756 km2/yr globally. Remote sensing can prove to be a very useful tool in monitoring forests and associated deforestation to a sufficient level of accuracy without the need to physically survey the forest areas, as many of them are physically inaccessible. The methodology for the assessment of forest cover using digital image processing (ERDAS) has been followed. The satellite data for the study were procured from the Indian Institute of Remote Sensing (IIRS), Dehradun, in digital format. While procuring the satellite data, care was taken to ensure that the data were cloud free and did not belong to the dry and leafless season. The Normalized Difference Vegetation Index (NDVI) has been used as a numerical indicator of the reduction in ground biomass: NDVI = (near I.R - Red)/(near I.R + Red). After calculating the NDVI variations and the associated mean, we analysed the change in ground biomass. Through this paper, we have tried to indicate the rate of deforestation over a given period of time by comparing the forest cover at different time intervals.
With the help of remote sensing and GIS techniques, it is clearly shown that the total forest cover is continuously degrading and being transformed into various land use/land cover categories.
Keywords: remote sensing, deforestation, supervised classification, NDVI, change detection
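The NDVI formula quoted in the abstract above, NDVI = (NIR - Red)/(NIR + Red), can be sketched per pixel as follows; the reflectance values are illustrative:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, guarded against zero division."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    out = np.zeros_like(denom)
    # Where NIR + Red == 0 the index is undefined; leave those pixels at 0
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Illustrative reflectances: dense vegetation vs. sparse/bare ground
values = ndvi([0.50, 0.30], [0.10, 0.25])
```

Dense vegetation reflects strongly in the near infrared and weakly in red, giving values near 1, while bare or degraded ground drops toward 0, which is why a falling mean NDVI between acquisition dates signals biomass loss.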
805 Genetic Variation in CYP4F2 and VKORC1: Pharmacogenomics Implications for Response to Warfarin
Authors: Zinhle Cindi, Collet Dandara, Mpiko Ntsekhe, Edson Makambwa, Miguel Larceda
Abstract:
Background: Warfarin is the most commonly used drug in the management of thromboembolic disease. However, there is huge variability in the time, number of doses or starting doses required for patients to achieve the required international normalised ratio (INR), which is compounded by a narrow therapeutic index. Many genetic-association studies have reported on European and Asian populations, which has led to the design of specific algorithms that are now being used to assist in warfarin dosing. However, very few studies have looked at the pharmacogenetics of warfarin in African populations, yet huge differences in the dosage required to reach the same INR have been observed. Objective: We set out to investigate the distribution of 3 SNPs, CYP4F2 c.1347C > T, VKORC1 g.-1639G > A and VKORC1 c.1173C > T, among South African Mixed Ancestry (MA) and Black African patients. Methods: DNA was extracted from 383 participants and subsequently genotyped using PCR/RFLP for the CYP4F2 c.1347 (V433M) (rs2108622), VKORC1 g.-1639 (rs9923231) and VKORC1 c.1173 (rs9934438) SNPs. Results: Comparing the Black and MA groups, significant differences were observed in the distribution of the following genotypes: CYP4F2 c.1347C/T (23% vs. 39%, p=0.03), all VKORC1 g.-1639G > A genotypes (p < 0.006) and all VKORC1 c.1173C > T genotypes (p < 0.007). Conclusion: CYP4F2 c.1347T (V433M) reduces CYP4F2 protein levels and is therefore expected to affect the amount of warfarin needed to block vitamin K recycling. The VKORC1 g.-1639A variant alters transcriptional regulation, thereby affecting the function of vitamin K epoxide reductase in vitamin K production. The VKORC1 c.1173T variant reduces the enzyme activity of VKORC1, consequently enhancing the effectiveness of warfarin.
These are preliminary results; more genetic characterization is required to understand all the genetic determinants affecting how patients respond to warfarin.
Keywords: algorithms, pharmacogenetics, thromboembolic disease, warfarin
804 Coordination of Traffic Signals on Arterial Streets in Duhok City
Authors: Dilshad Ali Mohammed, Ziyad Nayef Shamsulddin Aldoski, Millet Salim Mohammed
Abstract:
The increase in levels of traffic congestion along urban signalized arterials calls for efficient traffic management. The application of traffic signal coordination can improve traffic operation and safety for a series of signalized intersections along an arterial. The objective of this study is to evaluate the benefits achievable through actuated traffic signal coordination and to compare control delay against the same signalized intersections operating in isolation. To accomplish this purpose, a series of eight signalized intersections located on two major arterials in Duhok City was chosen for the study. Traffic data (traffic volumes, link and approach speeds, and passenger car equivalents) were collected at peak hours. Various methods were used for collecting data, such as the video recording technique, the moving vehicle method and manual methods. Geometric and signalization data were also collected for the purpose of the study. The coupling index was calculated to check the attainability of coordination, and then time-space diagrams were constructed representing one-way coordination for the intersections on Barzani and Zakho Streets, and others representing two-way coordination for the intersections on Zakho Street, with accepted progression bandwidth efficiency. The results of this study show a large progression bandwidth of 54 seconds for eastbound coordination and 17 seconds for westbound coordination on Barzani Street under a suggested controlled speed of 60 kph, consistent with the present data. For Zakho Street, the progression bandwidth is 19 seconds for eastbound coordination and 18 seconds for westbound coordination under a suggested controlled speed of 40 kph. The results show that traffic signal coordination led to a large reduction in intersection control delays on both arterials.
Keywords: bandwidth, congestion, coordination, traffic, signals, streets
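Two quantities behind the time-space diagrams in the abstract above are the signal offset needed for progression at the controlled speed and the bandwidth efficiency. A minimal sketch with standard formulas; the 500 m spacing and 120 s cycle are assumed example values, not the study's data:

```python
def progression_offset(distance_m, speed_kph):
    """Seconds for a platoon to travel between successive signals at the design speed."""
    return distance_m / (speed_kph / 3.6)  # kph -> m/s conversion

def bandwidth_efficiency(bandwidth_s, cycle_s):
    """Fraction of the signal cycle usable by a progressing platoon."""
    return bandwidth_s / cycle_s

# Example: 500 m signal spacing at the 60 kph controlled speed on Barzani Street
offset = progression_offset(500, 60)
# Example: the reported 54 s eastbound band against an assumed 120 s cycle
efficiency = bandwidth_efficiency(54, 120)
```

Setting successive green starts apart by the travel-time offset is what produces the unobstructed green band whose width the study reports.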
803 Analysis of Human Toxicity Potential of Major Building Material Production Stage Using Life Cycle Assessment
Authors: Rakhyun Kim, Sungho Tae
Abstract:
Global environmental issues such as abnormal weather due to global warming, resource depletion, and ecosystem distortions have been escalating due to rapid population growth and the expansion of industrial and economic development. Accordingly, initiatives have been implemented by many countries to protect the environment through indirect regulation methods such as Environmental Product Declaration (EPD), in addition to direct regulations such as various emission standards. Following this trend, life cycle assessment (LCA) techniques that provide quantitative environmental information, such as Human Toxicity Potential (HTP), for buildings are being developed in the construction industry. However, at present, studies on the environmental database of building materials are not sufficient to provide this support adequately. The purpose of this study is to analyze the human toxicity potential of the production stage of major building materials using life cycle assessment. For this purpose, a theoretical consideration of life cycle assessment and environmental impact categories was performed and the direction of the study was set: the major building materials were identified from a global warming potential perspective, and a life cycle inventory database was selected. Classification was performed for 17 substances and impact indices, such as human toxicity potential, as specified in CML 2001. The environmental impact, in terms of human toxicity potential, of the building material production stage was calculated through characterization. Meanwhile, the environmental impacts of building materials in the same category were analyzed based on the characterization impacts calculated in this study. In this study, environmental impact coefficients of major building materials were established in compliance with ISO 14040.
Through this, it is believed that the results will effectively support the decisions of stakeholders to improve the environmental performance of buildings and provide a basis for the voluntary participation of architects in environmental consideration activities.
Keywords: human toxicity potential, major building material, life cycle assessment, production stage
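The characterization step named in the abstract above multiplies each inventory emission by a substance-specific characterization factor and sums the results into one indicator. A minimal sketch of an HTP-style calculation; the substances and factor values are hypothetical placeholders, not CML 2001 data:

```python
def characterized_impact(inventory_kg, factors):
    """Impact indicator = sum over substances of emitted mass x characterization factor.

    inventory_kg: {substance: kg emitted per functional unit}
    factors: {substance: kg 1,4-DB eq per kg} (hypothetical values below)
    Substances without a factor in this impact category contribute nothing.
    """
    return sum(mass * factors[sub]
               for sub, mass in inventory_kg.items() if sub in factors)

# Toy production-stage inventory for one material, with made-up HTP factors
inventory = {"benzene": 2.0, "arsenic": 0.001}
htp_factors = {"benzene": 1900.0, "arsenic": 350000.0}
htp = characterized_impact(inventory, htp_factors)
```

The same summation applied per impact category is what turns a raw inventory of 17 substances into the per-material coefficients the study establishes.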
802 Screening of Lactic Acid Bacteria Isolated from Traditional Fermented Products: Potential Probiotic Bacteria with Antimicrobial and Cytotoxic Activities
Authors: Genesis Julyus T. Agcaoili, Esperanza C. Cabrera
Abstract:
Thirty (30) isolates of lactic acid bacteria (LAB) from traditionally prepared fermented products, specifically fermented soy-bean paste, fermented mustard and a fermented rice-fish mixture, were studied for their in vitro antimicrobial and cytotoxic activities. Seventeen (17) isolates were identified as Lactobacillus plantarum, while 13 isolates were identified as Enterococcus spp. using 16S rDNA sequences. The disc diffusion method was used to determine the antibacterial activity of LAB against Staphylococcus aureus (ATCC 25923) and Escherichia coli (ATCC 25922), while the modified agar overlay method was used to determine the antifungal activity of the LAB isolates against the yeast Candida albicans and the dermatophytes Microsporum gypseum, Trichophyton rubrum and Epidermophyton floccosum. The filter-sterilized LAB supernatants were evaluated for their cytotoxicity to mammalian colon cancer cell lines (HT-29 and HCT116) and normal human dermal fibroblasts (HDFn) using a resazurin assay (PrestoBlueTM). Colchicine was the positive control. No antimicrobial activity was observed against the bacterial test organisms or the yeast Candida albicans. On the other hand, all of the tested LAB strains were fungicidal for all the test dermatophytes. The cytotoxicity index profiles of the supernatants of the 15 randomly picked LAB isolates and the negative control (brain heart infusion broth) suggest non-toxicity to normal cells when compared to colchicine, whereas all LAB supernatants were found to be cytotoxic to the HT-29 and HCT116 colon cancer cell lines. The results provide strong support for the role of the lactic acid bacteria studied in antimicrobial treatment and anticancer therapy.
Keywords: antimicrobial, fermented products, fungicidal activity, lactic acid bacteria, probiotics
801 Enhancement of Density-Based Spatial Clustering Algorithm with Noise for Fire Risk Assessment and Warning in Metro Manila
Authors: Pinky Mae O. De Leon, Franchezka S. P. Flores
Abstract:
This study focuses on applying an enhanced density-based spatial clustering algorithm with noise (DBSCAN) for fire risk assessment and warning in Metro Manila. Unlike other clustering algorithms, DBSCAN is known for its ability to identify arbitrarily shaped clusters and its resistance to noise. However, its performance diminishes when handling high-dimensional data, in which it can read noise points as relevant data points. Also, the algorithm is dependent on the parameters (eps and minPts) set by the user; choosing the wrong parameters can greatly affect its clustering result. To overcome these challenges, the study proposes three key enhancements: first, to utilize multiple MinHash and locality-sensitive hashing to decrease the dimensionality of the data set; second, to implement Jaccard similarity before applying the parameter Epsilon to ensure that only similar data points are considered neighbors; and third, to use the concept of the Jaccard neighborhood along with the parameter MinPts to improve the classification of core points and the identification of noise in the data set. The results show that the modified DBSCAN algorithm outperformed three other clustering methods: it achieved fewer outliers, which facilitated a clearer identification of fire-prone areas; a high Silhouette score, indicating well-separated clusters that distinctly identify areas with potential fire hazards; and an exceptionally low Davies-Bouldin index and a high Calinski-Harabasz score, highlighting its ability to form compact and well-defined clusters, making it an effective tool for assessing fire hazard zones. This study is intended for assessing the areas in Metro Manila that are most prone to fire risk.
Keywords: DBSCAN, clustering, Jaccard similarity, MinHash LSH, fires
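The second and third enhancements in the abstract above, treating points as neighbors only when their Jaccard similarity reaches an Epsilon-like threshold and classifying core points by the size of that Jaccard neighborhood, can be sketched as follows; representing each record as a set of attribute tokens is an assumption, and the sample records are invented:

```python
def jaccard(a, b):
    """Jaccard similarity of two sets: |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def jaccard_neighborhood(points, idx, eps):
    """Indices of points at least eps-similar to points[idx], excluding itself."""
    return [j for j in range(len(points))
            if j != idx and jaccard(points[idx], points[j]) >= eps]

def is_core(points, idx, eps, min_pts):
    """Core-point test analogous to DBSCAN's |N_eps(p)| >= minPts."""
    return len(jaccard_neighborhood(points, idx, eps)) >= min_pts

# Invented records: sets of risk-attribute tokens per barangay-like unit
records = [{"fire", "slum", "dense"}, {"fire", "slum", "market"}, {"flood"}]
```

Note the inversion relative to classic DBSCAN: similarity must be at least the threshold rather than distance at most eps, which is exactly what keeps dissimilar noise records out of each neighborhood.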
800 Effect of Sustainability Accounting Disclosure on Financial Performance of Listed Brewery Firms in Nigeria
Authors: Patricia Chinyere Oranefo
Abstract:
This study examined the effect of sustainability accounting disclosure on the financial performance of listed brewery firms in Nigeria. The dearth of empirical evidence and literature on 'governance disclosure' as one of the explanatory variables of sustainability accounting reporting was the major motivation for this study. The main objective was to ascertain the effect of sustainability accounting disclosure on the financial performance of listed brewery firms in Nigeria. An ex-post facto research design was adopted for the study. The population of this study comprises the five (5) brewery firms quoted on the floor of the Nigeria Exchange Group (NSX), and a sample of four (4) listed firms was drawn using the purposive sampling method. Secondary data were carefully sourced from the financial statements/annual reports and sustainability reports of the quoted brewery firms from 2012 to 2021. Panel regression analysis, with the aid of E-views 10.0 software, was used to test the statistical significance of the effect of sustainability accounting disclosure on financial performance. The results showed that economic sustainability disclosure indices do not significantly affect the return on assets of listed brewery firms in Nigeria. The findings further revealed that environmental sustainability disclosure indices do not significantly affect the return on equity of listed brewery firms in Nigeria. More so, the results showed that social sustainability disclosure indices significantly affect the net profit margin of listed brewery firms in Nigeria. Finally, the results also established that governance sustainability disclosure indices do not significantly affect the earnings per share of listed brewery firms in Nigeria.
Consequent upon the findings, this study recommended, among others, that managers of brewery firms in Nigeria should improve and sustain full disclosure practices on economic, environmental, social and governance matters, following the guidelines of the Global Reporting Initiative (GRI), as these are capable of exerting a significant effect on the financial performance of firms in Nigeria.
Keywords: sustainability, accounting, disclosure, financial performance
Procedia PDF Downloads 59
799 NDVI as a Measure of Change in Forest Biomass
Authors: Amritansh Agarwal, Tejaswi Agarwal
Abstract:
Forest ecosystems play a very important role in the global carbon cycle, storing about 80% of all above-ground and 40% of all below-ground terrestrial organic carbon. There is much interest in the extent of tropical forests and their rates of deforestation for two reasons: greenhouse gas contributions and the profoundly negative impact on biodiversity. Deforestation has many ecological, social and economic consequences, one of which is the loss of biological diversity. The rapid deployment of remote sensing (RS) satellites and the development of RS analysis techniques in the past three decades have provided a reliable, effective, and practical way to characterize terrestrial ecosystem properties. Global estimates of tropical deforestation vary widely, ranging from 50,000 to 170,000 km²/yr; recent FAO tropical deforestation estimates for 1990–1995 cite 116,756 km²/yr globally. Remote sensing can prove to be a very useful tool in monitoring forests and the associated deforestation to a sufficient level of accuracy without the need to physically survey the forest areas, as many of them are physically inaccessible. The methodology for the assessment of forest cover using digital image processing (ERDAS) has been followed. The satellite data for the study were procured from the USGS website in digital format. While procuring the satellite data, care was taken to ensure that the data were cloud- and aerosol-free by making use of the FLAASH atmospheric correction technique. The Normalized Difference Vegetation Index (NDVI) has been used as a numerical indicator of the reduction in ground biomass: NDVI = (NIR − Red) / (NIR + Red). After calculating the NDVI variations and the associated mean, we have analysed the change in ground biomass. Through this paper, we have tried to indicate the rate of deforestation over a given period of time by comparing the forest cover at different time intervals. 
With the help of remote sensing and GIS techniques, it is clearly shown that the total forest cover is continuously degrading and transforming into various land use/land cover categories.
Keywords: remote sensing, deforestation, supervised classification, NDVI change detection
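The NDVI formula quoted in the abstract is straightforward to compute per pixel; a minimal NumPy sketch is shown below. The epsilon guard and the mean-change helper are our own additions for illustration, not part of the study's workflow.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), per pixel.
    A tiny eps avoids division by zero over dark (water/shadow) pixels."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def mean_ndvi_change(nir_t0, red_t0, nir_t1, red_t1) -> float:
    """Change in scene-mean NDVI between two dates, as a crude
    proxy for ground biomass change."""
    return float(ndvi(nir_t1, red_t1).mean() - ndvi(nir_t0, red_t0).mean())

# Toy 2x2 reflectance arrays standing in for the NIR and Red bands
nir = np.array([[0.6, 0.5], [0.7, 0.4]])
red = np.array([[0.2, 0.3], [0.1, 0.3]])
print(ndvi(nir, red))
```

A negative `mean_ndvi_change` between two acquisition dates would indicate a net loss of green biomass over the scene.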
Procedia PDF Downloads 402
798 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Annulus Pulley
Authors: Bijit Kalita, K. V. N. Surendra
Abstract:
The pulley works under both compressive loading, due to the contacting belt in tension, and a central torque that causes rotation. In a power transmission system, belt pulley assemblies present a contact problem in the form of two mating cylindrical parts. In this work, we modeled a pulley as a heavy two-dimensional circular disk. Stress analysis due to contact loading in the pulley mechanism is performed. Finite element analysis (FEA) is conducted for a pulley to investigate the stresses experienced on its inner and outer periphery. In heavy-duty applications such as automotive engines and industrial machines, the belt drive is among the most frequently used mechanisms to transmit power, and very heavy circular disks are usually used as pulleys. A pulley can be described as a drum and may have a groove between two flanges around the circumference. A rope, belt, cable or chain can be the driving element of a pulley system that runs over the pulley inside the groove. A pulley experiences normal and shear tractions on its contact region in the process of motion transmission. The region may be the belt-pulley contact surface or the pulley-shaft contact surface. In 1882, Hertz solved the elastic contact problem for the point contact and line contact of ideally smooth bodies; this theory is still generally utilized for computing the actual contact zone. Detailed stress analysis in the contact region of such pulleys is quite necessary to prevent early failure. In this paper, the results of finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. Based on the literature on contact stress problems arising in a wide field of applications, the stress distribution generated on the shaft-pulley and belt-pulley interfaces due to the application of high tension and torque was evaluated in this study using FEA concepts. 
Finally, the results obtained from ANSYS (APDL) were compared with Hertzian contact theory. The study is mainly focused on the fatigue life estimation of a rotating part as a component of an engine assembly, using the well-known Paris equation. Digital Image Correlation (DIC) analyses have been performed using open-source software. From the displacements computed from images acquired at minimum and maximum force, the displacement field amplitude is obtained. From these fields, the crack path is defined, and the stress intensity factors and crack tip position are extracted. A non-linear least-squares projection is used for the estimation of fatigue crack growth. Further study will be extended to various applications of rotating machinery, such as rotating flywheel disks, jet engines, compressor disks, roller disk cutters, etc., where Stress Intensity Factor (SIF) calculation plays a significant role in the accuracy and reliability of a safe design. Additionally, this study will be extended to predict crack propagation in the pulley using the maximum tangential stress (MTS) criterion for mixed mode fracture.
Keywords: crack-tip deformations, contact stress, stress concentration, stress intensity factor
Procedia PDF Downloads 124
797 Analysis of the Evolution of Landscape Spatial Patterns in Banan District, Chongqing, China
Authors: Wenyang Wan
Abstract:
The study of urban land use and landscape patterns is a current hotspot in the fields of planning and design, ecology, etc., and is of great significance for the construction of a city's overall humanistic ecosystem and the optimization of urban spatial structure. Banan District, as the main part of the eastern eco-city planning of Chongqing Municipality, is a key area for highlighting the ecological characteristics of Chongqing, realizing an effective transformation of ecological value, and promoting the integrated development of urban and rural areas. The analytical methods of the land use transfer matrix (GIS) and landscape pattern indices (Fragstats) were used to study the characteristics and laws of the evolution of land use landscape patterns in Banan District from 2000 to 2020, providing a reference for Banan District in alleviating its ecological landscape contradictions. The results of the study show that: ① Banan District is rich in land use types, of which cultivated land still accounted for 57.15% of the total landscape area in 2020, an absolute majority in the land use structure of Banan District; ② from 2000 to 2020, land use conversion in Banan District was characterized as cropland > woodland > grassland > shrubland > built-up land > water bodies > wetlands, with cropland converted to built-up land being the largest; ③ from 2000 to 2020, the landscape elements of Banan District were distributed in a balanced way, and the landscape types were rich and diversified, but, owing to the influence of human disturbance, the shapes of the landscape elements tended to be irregular, the dominant patches were scattered, and the patches had poor connectivity. 
It is recommended that in future regional ecological construction, the layout should be rationally optimized, the relationship between landscape components should be coordinated, the connectivity between landscape patches should be strengthened, and the degree of landscape fragmentation should be reduced.
Keywords: land use transfer, landscape pattern evolution, GIS and Fragstats, Banan district
Procedia PDF Downloads 72
796 The Role of Previous Cytomegalovirus Infection in Subsequent Lymphoma Development
Authors: Amalia Ardeljan, Lexi Frankel, Divesh Manjani, Gabriela Santizo, Maximillian Guerra, Omar Rashid
Abstract:
Introduction: Cytomegalovirus (CMV) infection is widespread, affecting between 60-70% of people in industrialized countries. CMV has previously been correlated with a higher incidence of Hodgkin lymphoma compared to noninfected persons. Research regarding prior CMV infection and subsequent lymphoma development is still controversial. With limited evidence, further research is needed in order to understand the relationship between previous CMV infection and subsequent lymphoma development. This study assessed the effect of CMV infection on the subsequent incidence of lymphoma. Methods: A retrospective cohort study (2010-2019) was conducted through a Health Insurance Portability and Accountability Act (HIPAA) compliant national database, using International Classification of Disease (ICD-9 and ICD-10) codes and Current Procedural Terminology (CPT) codes. These were used to identify lymphoma diagnoses in a previously CMV-infected population. Patients were matched for age range and Charlson Comorbidity Index (CCI). A chi-squared test was used to assess statistical significance. Results: A total of 14,303 patients was obtained in the CMV-infected group as well as in the control population (matched by age range and CCI score). Subsequent lymphoma development was seen at a rate of 11.44% (1,637) in the CMV group and 5.74% (822) in the control group, respectively. The difference was statistically significant, with p = 2.2×10⁻¹⁶ and an odds ratio of 2.696 (95% CI 2.483-2.927). In an attempt to stratify the population by antiviral medication exposure, the outcomes were limited by the small number of members exposed to antiviral medication in the control population. Conclusion: This study shows a statistically significant correlation between prior CMV infection and an increased subsequent incidence of lymphoma. 
Further exploration is needed to identify the potential carcinogenic mechanism of CMV and whether the results are attributable to confounding bias.
Keywords: cytomegalovirus, lymphoma, cancer, microbiology
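The cohort comparison above rests on a 2×2 table (lymphoma vs. no lymphoma, CMV vs. control). A minimal sketch of how such an odds ratio, its confidence interval and a chi-squared statistic are computed is shown below; the counts in the example are illustrative, not the study's data, and the function names are our own.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio for a 2x2 table with a Woolf (log-normal) 95% CI.
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic (1 df, no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative counts only: 10/100 exposed cases vs. 5/100 unexposed cases
or_, (lo, hi) = odds_ratio_ci(10, 90, 5, 95)
print(or_, (lo, hi), chi_squared_2x2(10, 90, 5, 95))
```

The p-value reported in the abstract would follow from comparing the chi-squared statistic against a chi-squared distribution with one degree of freedom.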
Procedia PDF Downloads 219
795 Companies’ Internationalization: Multi-Criteria-Based Prioritization Using Fuzzy Logic
Authors: Jorge Anibal Restrepo Morales, Sonia Martín Gómez
Abstract:
A model based on a logical framework was developed to quantify SMEs' internationalization capacity. To do so, linguistic variables, such as human talent, infrastructure, innovation strategies, FTAs, marketing strategies, finance, etc., were integrated. It is argued that a company’s management of international markets depends on internal factors, especially the capabilities and resources available. This study considers internal factors the biggest business challenge because they force companies to develop an adequate set of capabilities. At this stage, importance and strategic relevance have to be defined in order to build competitive advantages. A fuzzy inference system is proposed to model the resources, skills, and capabilities that determine the success of internationalization. Data: 157 linguistic variables were used. These variables were defined by international trade entrepreneurs, experts, consultants, and researchers. Using expert judgment, the variables were condensed into 18 factors that explain SMEs’ export capacity. The proposed model is applied by means of a case study of the textile and clothing cluster in Medellin, Colombia. In the model implementation, a general index of 28.2 was obtained for internationalization capabilities. The result confirms that the sector’s current capabilities and resources are not sufficient for a successful integration into the international market. The model specifies the factors and variables which need to be worked on in order to improve export capability. In the case of textile companies, the lack of continuous recording of information stands out. Likewise, there are very few studies directed towards developing long-term plans, and there is little consistency in export criteria. This method emerges as an innovative management tool linked to internal organizational spheres and their different abilities.
Keywords: business strategy, exports, internationalization, fuzzy set methods
Procedia PDF Downloads 294
794 SPARK: An Open-Source Knowledge Discovery Platform That Leverages Non-Relational Databases and Massively Parallel Computational Power for Heterogeneous Genomic Datasets
Authors: Thilina Ranaweera, Enes Makalic, John L. Hopper, Adrian Bickerstaffe
Abstract:
Data are the primary asset of biomedical researchers, and the engine for both discovery and research translation. As the volume and complexity of research datasets increase, especially with new technologies such as large single nucleotide polymorphism (SNP) chips, so too does the requirement for software to manage, process and analyze the data. Researchers often need to execute complicated queries and conduct complex analyses of large-scale datasets. Existing tools to analyze such data, and other types of high-dimensional data, unfortunately suffer from one or more major problems. They typically require a high level of computing expertise, are too simplistic (i.e., do not fit realistic models that allow for complex interactions), are limited by computing power, do not exploit the computing power of large-scale parallel architectures (e.g., supercomputers, GPU clusters, etc.), or are limited in the types of analysis available, compounded by the fact that integrating new analysis methods is not straightforward. Solutions to these problems, such as those developed and implemented on parallel architectures, are currently available to only a relatively small portion of medical researchers with the necessary access and know-how. The past decade has seen a rapid expansion of data management systems for the medical domain. Much attention has been given to systems that manage phenotype datasets generated by medical studies. The introduction of heterogeneous genomic data for the research subjects that reside in these systems has highlighted the need for substantial improvements in software architecture. To address this problem, we have developed SPARK, an enabling and translational system for medical research, leveraging existing high performance computing resources and analysis techniques currently available or under development. It builds these into The Ark, an open-source web-based system designed to manage medical data. 
SPARK provides a next-generation biomedical data management solution that is based upon a novel Micro-Service architecture and Big Data technologies. The system serves to demonstrate the applicability of Micro-Service architectures for the development of high performance computing applications. When applied to high-dimensional medical datasets such as genomic data, relational data management approaches with normalized data structures suffer from unfeasibly high execution times for basic operations such as insert (i.e. importing a GWAS dataset) and the queries that are typical of the genomics research domain. SPARK resolves these problems by incorporating non-relational NoSQL databases that have been driven by the emergence of Big Data. SPARK provides researchers across the world with user-friendly access to state-of-the-art data management and analysis tools while eliminating the need for high-level informatics and programming skills. The system will benefit health and medical research by eliminating the burden of large-scale data management, querying, cleaning, and analysis. SPARK represents a major advancement in genome research technologies, vastly reducing the burden of working with genomic datasets, and enabling cutting edge analysis approaches that have previously been out of reach for many medical researchers.
Keywords: biomedical research, genomics, information systems, software
Procedia PDF Downloads 270
793 Buy-and-Hold versus Alternative Strategies: A Comparison of Market-Timing Techniques
Authors: Jonathan J. Burson
Abstract:
With the rise of virtually costless, mobile-based trading platforms, stock market trading activity has increased significantly over the past decade, particularly for the millennial generation. This increased stock market attention, combined with the recent market turmoil due to the economic upset caused by COVID-19, makes the topics of market-timing and forecasting particularly relevant. While the overall stock market saw an unprecedented, historically long bull market from March 2009 to February 2020, the end of that bull market reignited a search by investors for a way to reduce risk and increase return. Similar searches for outperformance occurred in the early and late 2000s as the Dotcom bubble burst and the Great Recession led to years of negative returns for mean-variance, index investors. Extensive research has been conducted on fundamental analysis, technical analysis, macroeconomic indicators, microeconomic indicators, and other techniques—all using different methodologies and investment periods—in pursuit of higher returns with lower risk. The enormous variety of timeframes, data, and methodologies used by the diverse forecasting methods makes it difficult to compare the outcome of each method directly to other methods. This paper establishes a process to evaluate the market-timing methods in an apples-to-apples manner based on simplicity, performance, and feasibility. Preliminary findings show that certain technical analysis models provide a higher return with lower risk when compared to the buy-and-hold method and to other market-timing strategies. Furthermore, technical analysis models tend to be easier for individual investors both in terms of acquiring the data and in analyzing it, making technical analysis-based market-timing methods the preferred choice for retail investors.
Keywords: buy-and-hold, forecast, market-timing, probit, technical analysis
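As one concrete instance of the buy-and-hold vs. market-timing comparison, the sketch below implements a classic technical-analysis timing rule: hold the asset only when the price is above its n-day simple moving average. This rule and the toy price series are illustrative assumptions, not the paper's actual models.

```python
def sma(prices, n):
    """n-day simple moving average; None until n observations exist."""
    return [sum(prices[i - n + 1:i + 1]) / n if i >= n - 1 else None
            for i in range(len(prices))]

def buy_and_hold_return(prices):
    """Total return of buying at the start and holding to the end."""
    return prices[-1] / prices[0] - 1

def sma_timing_return(prices, n):
    """Total return of a rule that is in the market for day i only when
    yesterday's close was above yesterday's n-day SMA (no trading costs)."""
    ma = sma(prices, n)
    wealth = 1.0
    for i in range(1, len(prices)):
        in_market = ma[i - 1] is not None and prices[i - 1] > ma[i - 1]
        if in_market:
            wealth *= prices[i] / prices[i - 1]
    return wealth - 1

prices = [100, 102, 101, 105, 103, 98, 95, 99, 104, 108]
print(buy_and_hold_return(prices), sma_timing_return(prices, 3))
```

An apples-to-apples comparison of the kind the paper describes would run both functions over the same price history and period, and would also account for transaction costs and risk.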
Procedia PDF Downloads 97
792 Analysis of ZBTB17 Gene rs10927875 Polymorphism in Relation to Dilated Cardiomyopathy in Slovak Population
Authors: I. Boroňová, J. Bernasovská, J. Kmec, E. Petrejčíková
Abstract:
Dilated cardiomyopathy (DCM) is a primary myocardial disease characterized by progressive systolic dysfunction due to cardiac chamber dilatation and inefficient myocardial contractility, with an estimated prevalence of 37 per 100,000 people. It is the most frequent cause of heart failure and cardiac transplantation in young adults. About one-third of all patients have a suspected familial disease, indicating a genetic basis of DCM. Many candidate gene studies in humans have tested the association of single nucleotide polymorphisms (SNPs) in various genes coding for proteins with a known cardiovascular function. In our study, we present the results of ZBTB17 gene rs10927875 polymorphism genotyping in relation to dilated cardiomyopathy in the Slovak population. The study included 78 individuals: 39 patients with DCM and 39 healthy control persons. The mean age of patients with DCM was 50.7±11.5 years; the mean age of individuals in the control group was 51.3±9.8 years. Risk factors detected at baseline in each group included age, sex, body mass index, smoking status, diabetes and blood pressure. Genomic DNA was extracted from leukocytes by a standard methodology and screened for the rs10927875 polymorphism in an intron of the ZBTB17 gene using a real-time PCR method (StepOne, Applied Biosystems). The distribution of the investigated genotypes for the rs10927875 polymorphism in the group of patients with DCM was as follows: CC (89.74%), CT (10.26%), TT (0%); the distribution in the control group was CC (92.31%), CT (5.13%), and TT (2.56%). Using the chi-square (χ2) test, we compared genotype and allele frequencies between patients and controls. There was no difference in genotype or allele frequencies of the ZBTB17 gene rs10927875 polymorphism between patients and the control group (χ2=3.028, p=0.220; χ2=0.264, p=0.608). Our results represent an initial study and can be considered preliminary and the first of its kind in the Slovak population. 
Further studies of ZBTB17 gene polymorphisms in larger cohorts, together with additional functional investigations, are needed to fully understand the role of these genetic associations.
Keywords: dilated cardiomyopathy, SNP polymorphism, ZBTB17 gene, bioscience
Procedia PDF Downloads 383
791 Laboratory Investigations on the Utilization of Recycled Construction Aggregates in Asphalt Mixtures
Authors: Farzaneh Tahmoorian, Bijan Samali, John Yeaman
Abstract:
Road networks are increasingly expanding all over the world. The construction and maintenance of road pavements require large amounts of aggregates. The considerable usage of various natural aggregates for constructing roads, as well as the increasing rate at which solid waste is generated, has attracted the attention of many researchers in the pavement industry to investigate the feasibility of applying some of these waste materials as alternative materials in pavement construction. Among various waste materials, construction and demolition wastes, including Recycled Construction Aggregate (RCA), constitute a major part of the municipal solid waste in Australia. Creating opportunities for the application of RCA in civil and geotechnical engineering applications is an efficient way to increase the market value of RCA. However, in spite of such promising potential, insufficient and inconclusive data on the engineering properties of RCA have limited its reliability and design specifications to date. In light of this, this paper, as the first step of a comprehensive research program, aims to investigate the feasibility of using RCA obtained from construction and demolition wastes to replace part of the coarse aggregates in asphalt mixtures. As the suitability of aggregates for use in asphalt mixtures is determined by aggregate characteristics, including the physical and mechanical properties of the aggregates, an experimental program was set up to evaluate the physical and mechanical properties of RCA. This laboratory investigation included the measurement of the compressive strength and workability of RCA, particle shape, water absorption, flakiness index, crushing value, deleterious materials and weak particles, wet/dry strength variation, and particle density. 
In addition, a comparison of RCA properties with those of virgin aggregates was included as part of this investigation, and this paper presents the results of these investigations on RCA, basalt, and the RCA/basalt mix.
Keywords: asphalt, basalt, pavement, recycled aggregate
Procedia PDF Downloads 164
790 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network
Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman
Abstract:
We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. 
In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to significantly scale up to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach based on reinforcement learning, in order to relax some of the current limiting assumptions.
Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights
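The core of the Bayesian approach described above is a posterior over candidate targets that is re-normalized after each detection. The sketch below shows that update step; the likelihood numbers and target labels are illustrative assumptions, not the authors' model (their likelihoods would come from travel times on the camera graph).

```python
def normalize(weights):
    """Scale a dict of non-negative weights so they sum to 1."""
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

def update_posterior(prior, likelihoods):
    """Bayes' rule: posterior proportional to prior * likelihood
    of the newest detection under each target hypothesis."""
    posterior = {t: prior[t] * likelihoods[t] for t in prior}
    return normalize(posterior)

# Three candidate high-value targets with a uniform prior
prior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
# Likelihood of the latest camera detection under each hypothesis
likelihoods = {"A": 0.6, "B": 0.3, "C": 0.1}
posterior = update_posterior(prior, likelihoods)
print(posterior)  # target A becomes the most probable
```

Repeating `update_posterior` on each new detection (or after a soft intervention changes the suspect's route) concentrates probability mass on the true target over time.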
Procedia PDF Downloads 115
789 Minimum Wages and Its Impact on Agriculture and Non Agricultural Sectors with Special Reference to Recent Labour Reforms in India
Authors: Bikash Kumar Malick
Abstract:
Labour reform is a much-celebrated theme for policy makers; at the same time, it is a widely misunderstood concept, viewed with skepticism even by the educated masses in India. One of the widely discussed topics that needs in-depth examination is India’s labour laws. Such an examination may help identify the exact requirements of labour reform by making the labour laws simpler and more concise in form and implementation. It is also necessary to guide states in India in making laws on the subject, as the Indian Constitution itself is federal in form and unitary in spirit. Recently, the Code on Wages Bill has been introduced in the Indian Parliament, while three other codes are waiting to follow in the same line; these codes highlight the simplified features of labour laws intended to enable labour reform in a succinct manner. However, they still bring confusion to people's minds. This article attempts to wipe out that confusion and to correlate the labour reforms of both the centre and the states, which together generate employment and make growth sustainable in India, providing clear public understanding. The time is also ripe for minimizing apprehension about the coming labour laws, simplified in the different codes in India. This article attempts to highlight the need for labour reform and its possible impact. It also examines the higher rates of minimum wages and their link with coverage of the agricultural and non-agricultural sectors (including mines) over a period of time. It also takes into consideration minimum wages in the central and state spheres, which are linked to the Consumer Price Index to account for the living standard of workers, and examines the cause and effect between minimum wage and output in both the agricultural and non-agricultural sectors with regression analysis. 
Increases in the minimum wage have actually strengthened sustainable output.
Keywords: codes of wages, Indian Constitution, minimum wage, labour laws, labour reforms
Procedia PDF Downloads 197
788 Comparative Analysis of in vitro Release Profiles for Escitalopram and Escitalopram-Loaded Nanoparticles
Authors: Rashi Rajput, Manisha Singh
Abstract:
Escitalopram oxalate (ETP) is an FDA-approved antidepressant drug from the SSRI (selective serotonin reuptake inhibitor) category, used in the treatment of generalized anxiety disorder (GAD) and major depressive disorder (MDD). When taken orally, it is metabolized to S-demethylcitalopram (S-DCT) and S-didemethylcitalopram (S-DDCT) in the liver with the help of the enzymes CYP2C19, CYP3A4 and CYP2D6, causing side effects such as dizziness, fast or irregular heartbeat, headache, nausea, etc. Therefore, targeted and sustained drug delivery would be a helpful tool for increasing its efficacy and reducing side effects. The present study is designed to formulate a mucoadhesive nanoparticle formulation for this drug. Escitalopram-loaded polymeric nanoparticles were prepared by the ionic gelation method, and the optimized formulation was characterized by zeta average particle size (93.63 nm), zeta potential (-1.89 mV) and TEM (range of 60 nm to 115 nm) analysis, which confirms the nanometric size range of the drug-loaded nanoparticles, along with a polydispersity index of 0.117. In this research, we have studied the in vitro drug release profile of the ETP nanoparticles through a semi-permeable dialysis membrane. Three important characteristics affecting the drug release behaviour were the particle size, ionic strength and morphology of the optimized nanoparticles. The data showed that on increasing the particle size of the drug-loaded nanoparticles, the initial burst was reduced, which was comparatively higher for the free drug, whereas the formulation with 1 mg/ml chitosan in 1.5 mg/ml tripolyphosphate solution showed steady release over the entire release period. These data were further validated through mathematical modelling to establish the mechanism of drug release kinetics, which showed a typical linear diffusion profile in the optimized ETP-loaded nanoparticles.
Keywords: ionic gelation, mucoadhesive nanoparticle, semi-permeable dialysis membrane, zeta potential
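A "linear diffusion profile" of the kind mentioned above is commonly checked by fitting the Higuchi model, where cumulative release Q is linear in the square root of time. The sketch below fits that model by least squares; this is our interpretation of the modelling step, and the data points are illustrative, not the study's measurements.

```python
import math

def fit_higuchi(times_h, release_pct):
    """Least-squares slope k for the Higuchi model Q = k * sqrt(t)
    (regression through the origin)."""
    x = [math.sqrt(t) for t in times_h]
    return (sum(xi * qi for xi, qi in zip(x, release_pct))
            / sum(xi * xi for xi in x))

def r_squared(times_h, release_pct, k):
    """Goodness of fit of Q = k * sqrt(t) to the observed release data."""
    mean_q = sum(release_pct) / len(release_pct)
    ss_res = sum((q - k * math.sqrt(t)) ** 2
                 for t, q in zip(times_h, release_pct))
    ss_tot = sum((q - mean_q) ** 2 for q in release_pct)
    return 1 - ss_res / ss_tot

# Illustrative cumulative-release data: (time in hours, % released)
times = [1, 2, 4, 8, 12, 24]
release = [12, 17, 24, 34, 41, 58]
k = fit_higuchi(times, release)
print(k, r_squared(times, release, k))
```

An R² close to 1 for this fit, compared against zero-order and first-order fits, would support a diffusion-controlled release mechanism.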
Procedia PDF Downloads 294
787 Waste Heat Recovery System
Authors: A. Ramkumar, Anvesh Sagar, Preetham P. Karkera
Abstract:
Globalization in the modern era depends on international logistics, and the ocean-going merchant vessel provides its economical and reliable means. The propulsion systems that drive these massive vessels have gone through leaps and bounds of evolution. The most reliable system of propulsion, adopted by the majority of vessels, is the marine diesel engine. Since the first oil crisis of 1973, there has been demand for increments in the efficiency of the main engine. Due to the increase in oil prices, ship operators explore ways to reduce the operational cost of ships, and the newly adopted IMO EEDI and SEEMP rules call for effective measures to be taken in this regard. The main engine of a ship suffers substantial thermal losses, which mainly occur due to exhaust gas waste heat, radiation and cooling. So, to increase the overall efficiency of the system, we have to look into solutions for harnessing this waste energy of the main engine to increase fuel economy. Over the course of research, engine manufacturers have developed many waste heat recovery systems. In our paper, we examine additional options to harness this waste heat. The exhaust gas of the engine coming out of the turbocharger still holds enough heat to go to the exhaust gas economiser to produce steam. This heat of the exhaust gas can be used to heat a liquid with a lower boiling point after it comes out of the turbocharger. The vapour of this secondary liquid can be superheated by a bypass exhaust or the exhaust of the turbocharger. This vapour can be utilized to rotate a turbine coupled to a generator, and electric power for ship services can be produced with a proper configuration of the system, which can be included in the PMS of the ship. In this paper, we concentrate on power generation using the exhaust gas, thereby taking load off the main generator and increasing the efficiency of the system. This will help us to comply with the new IMO rules. 
Our method helps to develop clean energy. Keywords: EEDI–energy efficiency design index, IMO–international maritime organization, PMS–power management system, SEEMP–ship energy efficiency management plan
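The power available from the scheme the abstract describes can be approximated with a simple energy balance: the heat given up by the exhaust gas times an overall conversion efficiency for the secondary-fluid cycle. The sketch below illustrates this; the mass flow, temperatures, specific heat and cycle efficiency are illustrative assumptions, not figures from the paper.

```python
# Rough estimate of electrical power recoverable from main-engine exhaust
# via a secondary-fluid turbine-generator (bottoming cycle).
# All numerical values are illustrative assumptions.

def recoverable_power_kw(m_dot_kg_s, cp_kj_kgk, t_in_c, t_out_c, eta_cycle):
    """Heat released by the exhaust gas times an overall cycle efficiency."""
    q_exhaust_kw = m_dot_kg_s * cp_kj_kgk * (t_in_c - t_out_c)  # kW of heat
    return eta_cycle * q_exhaust_kw  # kW electrical

# Example: 20 kg/s of exhaust cooled from 250 C to 160 C,
# cp ~ 1.05 kJ/(kg.K), 15% overall conversion efficiency.
p_el = recoverable_power_kw(20.0, 1.05, 250.0, 160.0, 0.15)
print(round(p_el, 1))  # 283.5 kW available for the ship service load
```

Even at a modest conversion efficiency, several hundred kilowatts of service load can be shifted off the main generator in this illustration.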
Procedia PDF Downloads 381
786 Treatments for Overcoming Dormancy of Leucaena Seeds (Leucaena leucocephala)
Authors: Tiago Valente, Erico Lima, Bruno Deminicis, Andreia Cezario, Wallacy Santos, Fabiane Brito
Abstract:
Introduction: Leucaena leucocephala, known as leucaena, is a perennial legume shrub of subtropical regions whose forage shows favorable characteristics for livestock production. The objective of the study was to evaluate the influence of methods for overcoming dormancy of the seeds of Leucaena leucocephala (Lam.). Materials and Methods: The number of germinated seeds was evaluated daily, with radicle protrusion (growth to about 2 cm long) as the germination criterion. From the daily counts of germinated seeds, the following characteristics were evaluated: Step 1: Germination count, the cumulative percentage of germinated seeds on the third day after the start of the test (Germ3); Step 2: Percentage of germinated seeds, the total percentage of seeds that germinated up to the seventh day after the start of the test (Germ7); Step 3: Percentage of germinated seeds, the total percentage of seeds that germinated up to the fifteenth day after the start of the test (Germ15); Step 4: Germination speed index (GSI), calculated as the number of seeds germinated at the nth observation divided by the number of days after sowing; Step 5: Total count of seeds that did not germinate after 15 days (NGerm). The seed treatments were: (T1) water at 100 ºC/10 min; (T2) water at 100 ºC/1 min; (T3) acetone (10 min); (T4) ethyl alcohol (10 min); and (T5) intact seeds (control). Data were analyzed in a completely randomized design with eight replications, using the Tukey test at the 5% significance level. Results and Discussion: Treatment T1 had the highest germination speed index (GSI) and differed significantly from the other treatments (P < 0.05). Treatment T5 (control) showed the slowest response among treatments up to the seventh day after the beginning of the test (Germ7), with an accumulation of 20% germinated seeds.
The worst germination result was for T5, with 30% of seeds not germinated after 15 days of sowing. Acknowledgments: IFGoiano and CNPq (Brazil). Keywords: acetone, boiling water, germination, seed physiology
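The GSI described in Step 4 (each daily count of newly germinated seeds divided by the number of days after sowing, summed over all counts) can be sketched as follows; the daily counts used in the example are hypothetical, not data from the study.

```python
# Germination speed index (GSI) as described in Step 4: the number of
# seeds newly germinated at each daily observation, divided by the number
# of days after sowing, summed over all observations.
# The daily counts below are illustrative only.

def gsi(daily_new_germinations):
    """daily_new_germinations[i] = seeds germinating on day i+1 after sowing."""
    return sum(n / day for day, n in enumerate(daily_new_germinations, start=1))

counts = [0, 0, 5, 8, 4, 2, 1]  # days 1..7 after sowing (hypothetical)
print(round(gsi(counts), 2))  # 4.94
```

A treatment whose seeds germinate early (like T1 here) accumulates large terms at small day numbers and therefore a higher GSI than a slow-starting control.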
Procedia PDF Downloads 199
785 The Hypolipidemic and Anti-Nephrotoxic Potentials of Vernonia calvoana Extract in Acetaminophen-Treated Male Wistar Rats
Authors: Godwin E. Egbung, Item J. Atangwho, Diana O. Odey, Eyong U. Eyong
Abstract:
Background of the study: The frequent abuse of acetaminophen by field workers in the Calabar metropolis necessitated the present study on the hypolipidemic and anti-nephrotoxic potentials of Vernonia calvoana (VC) extract in acetaminophen (paracetamol) treated male albino Wistar rats. Methods: Thirty-five (35) male albino Wistar rats weighing 100-150 g were divided into five (5) groups of seven rats each. Group 1 served as normal control, group 2 received normal saline after treatment with acetaminophen (PCM), group 3 was treated with VC extract (200 mg/kg body weight), group 4 received VC extract (400 mg/kg body weight), and group 5 was administered 100 mg/kg body weight of vitamin E. At the end of the 21-day treatment period, the animals were sacrificed using chloroform vapours, and blood was collected via cardiac puncture and used for analyses of haematological as well as biochemical indices. Results: Results indicated significant decreases (p < 0.001) in LDL-c, TC and TG levels in groups 3, 4 and 5 relative to both the control and group 2; the atherogenic index showed a significant decrease (p < 0.001) in all treated groups compared with the control and PCM-treated groups. However, both extract-treated groups and the vitamin E-treated group showed significant (p < 0.001) increases in HDL-c relative to the control and PCM-treated groups. Serum potassium concentration was significantly (p < 0.05 and p < 0.001) reduced across all the treated groups compared with the control and PCM-treated groups. Group 4 showed significant (p < 0.001) increases in RBC count, Hb, and PCV compared with the PCM-treated group. Conclusions: We therefore conclude that ethanolic leaf extract of VC possesses probable anti-anemic and hypolipidemic potentials, and also ameliorates serum electrolyte imbalance in paracetamol-induced toxicity. Keywords: acetaminophen, haematological indices, hypolipidemic potentials, serum lipid profile, vernonia calvoana, wistar rats
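The abstract reports an "atherogenic index" without stating which definition was used. One commonly used form, assumed here purely for illustration, is the atherogenic index of plasma, AIP = log10(TG / HDL-c), with both lipids in the same units; the input values below are hypothetical.

```python
# Atherogenic index of plasma (AIP) = log10(TG / HDL-c).
# This is one common definition, assumed here for illustration; the
# abstract does not specify which atherogenic index the study computed.
import math

def atherogenic_index_of_plasma(tg_mmol_l, hdl_mmol_l):
    """AIP from triglycerides and HDL cholesterol in the same units."""
    return math.log10(tg_mmol_l / hdl_mmol_l)

# Hypothetical values: TG 1.7 mmol/L, HDL-c 1.0 mmol/L.
print(round(atherogenic_index_of_plasma(1.7, 1.0), 2))  # 0.23
```

Under this definition, the pattern the study reports (lower TG and higher HDL-c in treated groups) would necessarily lower the index.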
Procedia PDF Downloads 261
784 Seed Yield and Quality of Late Planted Rabi Wheat Crop as Influenced by Basal and Foliar Application of Urea
Authors: Omvati Verma, Shyamashrre Roy
Abstract:
A field experiment was conducted with three basal nitrogen levels (90, 120 and 150 kg N/ha) and five foliar urea treatments (absolute control, water spray, and 3% urea spray at anthesis, 7 and 14 days after anthesis) at G.B. Pant University of Agriculture & Technology, Pantnagar, U.S. Nagar (Uttarakhand), during the rabi season in a factorial randomized block design with three replications. Results revealed that nitrogen application of 150 kg/ha produced the highest seed, straw and biological yields; it was significantly superior to 90 kg N/ha and at par with 120 kg N/ha. The number of tillers increased significantly with increase in nitrogen dose up to 150 kg N/ha. Spike length, number of grains per spike, grain weight per spike and thousand-seed weight showed significantly higher values with 120 kg N/ha than with 90 kg N/ha and were at par with 150 kg N/ha; plant height showed a similar trend. Leaf area index and chlorophyll content increased significantly with nitrogen level at different stages. Among the foliar spray treatments, urea spray at anthesis showed the highest values for yield and yield attributes. For spike length and thousand-seed weight it was similar to the urea sprays at 7 and 14 days after anthesis, but for the rest of the yield attributes it was significantly higher than the other treatments. Among seed quality parameters, protein and sedimentation value increased significantly with nitrogen rate, whereas starch and hectolitre weight showed a decreasing trend; wet gluten content was not influenced by nitrogen level. Foliar urea spray at anthesis resulted in the highest protein and hectolitre weight, whereas urea spray at 7 days after anthesis gave the highest sedimentation value and wet gluten content. Keywords: foliar application, nitrogenous fertilizer, seed quality, yield
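The design described above (3 basal N levels × 5 foliar treatments, replicated in 3 blocks) can be laid out as in the sketch below; the plot randomization is illustrative, since the abstract does not give the field layout.

```python
# Layout of the factorial randomized block design described above:
# 3 basal N levels x 5 foliar treatments = 15 treatment combinations,
# each appearing once in each of 3 blocks (replications).
# The within-block randomization here is illustrative only.
import itertools
import random

n_levels = [90, 120, 150]  # kg N/ha
foliar = ["control", "water", "urea@anthesis", "urea@7d", "urea@14d"]

combos = list(itertools.product(n_levels, foliar))
print(len(combos))  # 15 treatment combinations

plots = []
for block in range(1, 4):  # three replications (blocks)
    order = combos[:]
    random.shuffle(order)  # randomize treatments within each block
    plots.extend((block, n, f) for n, f in order)

print(len(plots))  # 45 plots in total
```

Each block contains every combination exactly once, which is what allows the factorial analysis of the N-level and foliar-spray main effects and their interaction.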
Procedia PDF Downloads 279