Search results for: flow length
1041 An Assessment of Floodplain Vegetation Response to Groundwater Changes Using the Soil & Water Assessment Tool Hydrological Model, Geographic Information System, and Machine Learning in the Southeast Australian River Basin
Authors: Newton Muhury, Armando A. Apan, Tek N. Marasani, Gebiaw T. Ayele
Abstract:
The changing climate has degraded freshwater availability in Australia, which in turn influences vegetation growth to a great extent. This study assessed vegetation responses to groundwater using Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) Normalised Difference Vegetation Index (NDVI) and soil water content (SWC). A hydrological model, SWAT, was set up in a southeast Australian river catchment for groundwater analysis. The model was calibrated and validated against monthly streamflow from 2001 to 2006 and from 2007 to 2010, respectively. The SWAT-simulated soil water content for 43 sub-basins and monthly MODIS NDVI data for three vegetation types (forest, shrub, and grass) were analysed in the machine learning tool Waikato Environment for Knowledge Analysis (WEKA) using two supervised machine learning algorithms, support vector machine (SVM) and random forest (RF). The assessment shows that the responses of the different vegetation types to soil water content vary between the dry and wet seasons. The WEKA model produced strong positive relationships (r = 0.76, 0.73, and 0.81) between the NDVI values of all vegetation in the sub-basins and soil water content (SWC), groundwater flow (GW), and the combination of these two variables, respectively, during the dry season. However, these relationships weakened by 36.8% (r = 0.48) against GW and by 13.6% (r = 0.63) against SWC in the wet season. Although the rainfall pattern is highly variable in the study area, summer rainfall is very effective for the growth of the grass vegetation type. This study has enriched our knowledge of vegetation responses to groundwater in each season, which will facilitate better floodplain vegetation management.
Keywords: ArcSWAT, machine learning, floodplain vegetation, MODIS NDVI, groundwater
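The r values quoted above are correlation coefficients between NDVI and the hydrological variables. As a minimal sketch of that computation, the following implements the Pearson correlation from scratch; the NDVI and soil-water values are invented stand-ins, not the study's sub-basin data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented dry-season values for six sub-basins: monthly NDVI of one
# vegetation class vs. SWAT-simulated soil water content (mm).
ndvi = [0.31, 0.42, 0.55, 0.48, 0.62, 0.39]
swc = [41.0, 55.0, 71.0, 60.0, 78.0, 50.0]

r = pearson_r(ndvi, swc)
```

For these toy values r comes out close to 1, i.e. a strong positive relationship of the kind the abstract reports for the dry season.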
Procedia PDF Downloads 105
1040 Deep Mill Level Zone (DMLZ) of Ertsberg East Skarn System, Papua; Correlation between Structure and Mineralization to Determine the Characteristic Orebody of the DMLZ Mine
Authors: Bambang Antoro, Lasito Soebari, Geoffrey de Jong, Fernandy Meiriyanto, Michael Siahaan, Eko Wibowo, Pormando Silalahi, Ruswanto, Adi Budirumantyo
Abstract:
The Ertsberg East Skarn System (EESS) is located in the Ertsberg Mining District, Papua, Indonesia. EESS is a sub-vertical zone of copper-gold mineralization hosted in both diorite (vein-style mineralization) and skarn (disseminated and vein-style mineralization). The Deep Mill Level Zone (DMLZ) is a mining zone in the lower part of the EESS that produces copper and gold. The DMLZ deposit is located below the Deep Ore Zone deposit between the 3125 m and 2590 m elevations; it measures roughly 1,200 m in length and is between 350 and 500 m in width. The DMLZ was planned to start mining in Q2 2015, to be mined at an ore extraction rate of about 60,000 tpd by the block cave method (the block cave contains 516 Mt). Mineralization and associated hydrothermal alteration in the DMLZ are hosted and enclosed by a large stock (the Main Ertsberg Intrusion) that is barren on all sides and above the DMLZ. Late porphyry dikes that cut through the Main Ertsberg Intrusion are spatially associated with the center of the DMLZ hydrothermal system. The DMLZ orebody is hosted in diorite and skarn, both dominated by vein-style mineralization. The percentages of material mined at the DMLZ, compared with current reserves, are: diorite 46% (0.46% Cu, 0.56 ppm Au, 0.83% EqCu); skarn 39% (1.4% Cu, 0.95 ppm Au, 2.05% EqCu); hornfels 8% (0.84% Cu, 0.82 ppm Au, 1.39% EqCu); and marble 7%, possibly mined as waste. Correlation between the Ertsberg intrusion, major structures, and vein-style mineralization is important to determine the characteristics of the orebody in the DMLZ mine. Generally, the Deep Mill Level Zone has two types of vein-filling mineralization from the two hosts (diorite and skarn): in the diorite host, the vein system is filled by chalcopyrite-bornite-quartz and pyrite; in the skarn host, the veins are filled by chalcopyrite-bornite-pyrite and magnetite without quartz.
Based on their orientation, the stockwork veins in the diorite host and the shallow veins in the skarn host generally trend NW-SE and NE-SW with shallow to moderate dips. The Deep Mill Level Zone is controlled by two main major faults; geologists have identified and verified local structures between the major structures, trending NW-SE and NE-SW, with characteristic slickensides, shearing, gouge, and water-gas channels, some of which have been re-healed.
Keywords: copper-gold, DMLZ, skarn, structure
Procedia PDF Downloads 506
1039 Haematology and Reproductive Performance of Pubertal Rabbit Does Administered Crude Moringa oleifera (Lam.) Leaf Extract
Authors: Ewuola E. O., Sokunbi O. A., Oyedemi O. M., Sanni K. M
Abstract:
Moringa oleifera leaf has traditionally been used in local medicine as an ingredient in herbal formulations as a blood purifier, cholesterol-reducing agent, and immune and reproductive enhancer. Twenty-four pubertal rabbit does, divided equally into four groups, were administered varied concentrations of crude Moringa oleifera leaf extract (CMOLE) by gavage at a dose of 2.5 ml/kg body weight (BW) every 48 hours for 63 days to investigate the effect of the extract on the haematology and reproductive performance of pubertal rabbit does. Each treatment was replicated six times. Four experimental treatments were used: the animals on the control (T1) were administered water only, while rabbits on treatments 2, 3, and 4 were administered 100 ml CMOLE/L, 200 ml CMOLE/L, and 300 ml CMOLE/L, respectively. The does were placed on the extract for two weeks before mating and five weeks after mating, and administration continued for another two weeks after kindling. Six proven untreated bucks were used for mating the twenty-four treated does; the bucks were randomly allotted such that each buck mated at least one treated doe in each treatment. The same management practices and experimental diets were given ad libitum to all animals. Blood was sampled from the gestating does in the third trimester for haematological analysis. The haematology results showed that rabbits treated with 100 ml CMOLE/L had a mean corpuscular volume (93.38 fl) significantly (p < 0.05) higher than the control, which received water only (82.24 fl), but not significantly different from T3 (200 ml CMOLE/L) and T4 (300 ml CMOLE/L), which had mean values of 91.69 fl and 91.49 fl, respectively.
The erythrocyte counts, leukocyte counts, haematocrit, haemoglobin concentration, mean corpuscular haemoglobin, mean corpuscular haemoglobin concentration, lymphocyte, neutrophil, monocyte, and eosinophil counts were not significantly different across the treatments. For platelets, animals on T2 (100 ml CMOLE/L) had the highest value of 148.80 x 10⁹/L, which was statistically identical to T3 (200 ml CMOLE/L; mean 141.50 x 10⁹/L) but significantly (p < 0.05) higher than T4 (300 ml CMOLE/L; mean 135.00 x 10⁹/L) and the control, which had the lowest mean value of 126.60 x 10⁹/L. The conception rate of the treated animals was higher than that of the control group. The animals administered 300 ml CMOLE/L had the apparently highest litter size of 5.75, while gestation length and litter weight tended to decline with increasing CMOLE concentration. The investigation demonstrated the potential effect of crude Moringa oleifera leaf extract on pubertal rabbit does: administration of up to 300 ml of crude Moringa oleifera leaf extract per litre did not adversely affect, but rather improved, the haematological response and reproductive potential of gestating rabbit does.
Keywords: conception, haematology, moringa leaf extract, rabbit does
Procedia PDF Downloads 513
1038 Validation of the Recovery of House Dust Mites from Fabrics by Means of Vacuum Sampling
Authors: A. Aljohani, D. Burke, D. Clarke, M. Gormally, M. Byrne, G. Fleming
Abstract:
Introduction: House dust mites (HDMs) are a source of allergen particles embedded in textiles and furnishings. Vacuum sampling is commonly used to recover HDMs and determine their abundance, but the efficiency of this method is poorly standardized. Here, the efficiency of recovery of HDMs from home-associated textiles was evaluated using vacuum sampling protocols. Methods/Approach: Live mites (LMs) or dead mites (DMs) of the house dust mite Dermatophagoides pteronyssinus (FERA, UK) were separately seeded onto the surfaces of smooth cotton, denim, and fleece (25 mites per 10 x 10 cm² square) and left for 10 minutes before vacuuming. Fabrics were vacuumed (SKC Flite 2 pump) at a flow rate of 14 L/min for 60, 90, or 120 seconds, and the number of mites retained by the filter unit (0.4 μm x 37 mm) was determined. Vacuuming was carried out in a linear direction (Protocol 1) or in a multidirectional pattern (Protocol 2). Additional fabrics with LMs were frozen and then thawed, euthanizing the live mites (now termed EMs). Results/Findings: While recovery was significantly greater (p = 0.000; 76% greater) from fabrics seeded with DMs than with LMs, irrespective of vacuuming protocol or fabric type, the efficiency of recovery of DMs (72%-76%) did not vary significantly between fabrics. For fabrics containing EMs, recovery was greatest for smooth cotton and denim (65%-73% recovered) and least for fleece (15% recovered). There was no significant difference (p = 0.99) in the recovery of mites across all three mite categories from smooth cotton and denim, but significantly fewer (p = 0.000) mites were recovered from fleece. Scanning electron microscopy images of HDM-seeded fabrics showed that live mites burrowed deeply into the fleece weave, which reduced their efficiency of recovery by vacuuming.
Research Implications: The results presented here have implications for the recovery of HDMs by vacuuming and for the choice of fabric to ameliorate HDM-dust sensitization.
Keywords: allergy, asthma, dead, fabric, fleece, live mites, sampling
Procedia PDF Downloads 142
1037 Fibrin Glue Reinforcement of Choledochotomy Closure Suture Line for Prevention of Bile Leak in Patients Undergoing Laparoscopic Common Bile Duct Exploration with Primary Closure: A Pilot Study
Authors: Rahul Jain, Jagdish Chander, Anish Gupta
Abstract:
Introduction: Laparoscopic common bile duct exploration (LCBDE) allows cholecystectomy and the removal of common bile duct (CBD) stones to be performed in the same sitting, thereby decreasing hospital stay. A choledochotomy for CBD exploration can be closed primarily with an absorbable suture material, but this can lead to postoperative biliary leakage. In this study, we evaluated whether reinforcing the choledochotomy suture line with fibrin glue can further lower the incidence of bile leakage. Fibrin glue has a haemostatic and sealing action: it strengthens the last step of physiological coagulation and provides biostimulation, which favours the formation of new tissue matrix. Methodology: This study was conducted at a tertiary care teaching hospital in New Delhi, India, from 2011 to 2013. Twenty patients with CBD stones documented on MRCP and a CBD diameter of 9 mm or more were included. Patients were randomized into two groups: Group A, in which the choledochotomy was closed with polyglactin 4-0 suture and the suture line reinforced with fibrin glue, and Group B, in which the choledochotomy was closed with polyglactin 4-0 suture alone. The two groups were evaluated and compared on clinical parameters such as operative time, drain content, drain output, number of days the drain was required, blood loss and transfusion requirements, length of postoperative hospital stay, and conversion to open surgery. Results: The operative time ranged from 60 to 210 min (mean 131.50 min) in Group A and from 65 to 300 min (mean 140 min) in Group B. Blood loss ranged from 10 to 120 ml (mean 51.50 ml) in Group A and from 10 to 200 ml (mean 53.50 ml) in Group B. There was no case of bile leak in Group A, but there were bile leaks in 2 cases in Group B (minimum 0, maximum 900 ml, mean 97 ml; p = 0.147), with no statistically significant difference in bile leak between the test and control groups.
The minimum and maximum serous drainage was nil and 80 ml (mean 11 ml) in Group A, and nil and 270 ml (mean 72.50 ml) in Group B. The p value was 0.028, which is statistically significant; serous leakage in Group A was therefore significantly less than in Group B. The drains in Group A were removed after 2 to 4 days (mean 3 days), and in Group B after 2 to 9 days (mean 3.9 days). The patients in Group A stayed in hospital postoperatively for 3 to 8 days (mean 5.30 days), while in Group B the stay ranged from 3 to 10 days (mean 5 days). Conclusion: Fibrin glue application on the CBD decreases bile leakage, though not to a statistically significant degree, and significantly decreases postoperative serous drainage after LCBDE. Fibrin glue application on the CBD is a safe and easy technique without significant adverse effects and can help less experienced surgeons performing LCBDE.
Keywords: bile leak, fibrin glue, LCBDE, serous leak
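The group comparisons above rest on two-sample significance tests. As a sketch of one such nonparametric test, the following implements a Mann-Whitney rank-sum p value with the normal approximation (and no tie-variance correction); the per-patient drainage volumes are invented, not the trial's data.

```python
import math

def rank_sum_p(a, b):
    """Two-sided Mann-Whitney U p-value via the normal approximation.

    A minimal illustration of the kind of two-sample test behind the
    p values quoted above; not suitable for small-sample exact inference.
    """
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)

    def avg_rank(v):
        # average rank of value v in the pooled sample (handles ties)
        idxs = [i + 1 for i, x in enumerate(pooled) if x == v]
        return sum(idxs) / len(idxs)

    r1 = sum(avg_rank(v) for v in a)
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Invented per-patient serous drainage volumes (ml), not the trial's data:
group_a = [0, 3, 5, 7, 8, 9, 11, 12, 15, 20]             # glue-reinforced
group_b = [10, 25, 40, 60, 80, 95, 120, 150, 200, 270]   # suture alone

p = rank_sum_p(group_a, group_b)
```

With these well-separated toy groups the p value falls below conventional significance thresholds, mirroring the significant serous-drainage difference reported above.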
Procedia PDF Downloads 217
1036 The Effect of Mixing and Degassing Conditions on the Properties of Epoxy/Anhydride Resin System
Authors: Latha Krishnan, Andrew Cobley
Abstract:
Epoxy resin is widely used as a matrix for composites in aerospace, automotive, and electronic applications due to its outstanding mechanical properties. These properties are chiefly predetermined by the chemical structure of the prepolymer and the type of hardener, but they can also be varied by processing conditions such as prepolymer-hardener mixing, degassing, and curing. In this research, the effect of degassing on curing behaviour and void occurrence is experimentally evaluated for an epoxy/anhydride resin system. The epoxy prepolymer was mixed with an anhydride hardener and accelerator in appropriate quantities. To investigate the effect of degassing on the curing behaviour and void content of the resin, uncured resin samples were prepared using three different methods: (1) no degassing; (2) degassing of the prepolymer; and (3) degassing of the mixed solution of prepolymer and hardener with accelerator. The uncured resins were tested by differential scanning calorimetry (DSC) to observe changes in the curing behaviour of the three resin samples, analysing factors such as gel temperature, peak cure temperature, and heat of reaction/heat flow during curing. Additionally, the completely cured samples were tested by DSC to identify changes in the glass transition temperature (Tg) between the three samples. To evaluate the effect of degassing on the void content and morphology of the cured epoxy resin, the fractured surfaces of the cured resin were examined under a scanning electron microscope (SEM). Changes in the mechanical properties of the cured resin were also studied by a three-point bending test. It was found that degassing at different stages of resin mixing had significant effects on properties such as the glass transition temperature and the void content and void size of the epoxy/anhydride resin system.
For example, degassing with vacuum applied to the mixed resin resulted in a higher glass transition temperature (Tg) and lower void content.
Keywords: anhydride epoxy, curing behaviour, degassing, void occurrence
Procedia PDF Downloads 354
1035 Assessment of the Landscaped Biodiversity in the National Park of Tlemcen (Algeria) Using Per-Object Analysis of Landsat Imagery
Authors: Bencherif Kada
Abstract:
In forest management practice, landscape and the Mediterranean forest are rarely treated as linked objects. Yet sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscape units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classification, the landscape-level biodiversity of the National Park of Tlemcen (Algeria). The methodology is based on ground data and on the basic processing units of object-oriented classification: segments, so-called image objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper Plus (ETM+) imagery is performed on image objects rather than on pixels. The advantages of object-oriented classification are the full use of meaningful statistics and texture calculation, uncorrelated shape information (e.g., length-to-width ratio, direction, and area of an object), topological features (neighbour, super-object, etc.), and the close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbours method is more efficient than per-pixel classification. It simplifies the content of the image while preserving spectrally and spatially homogeneous land-cover types such as Aleppo pine stands; cork oak groves; mixed groves of cork oak, holm oak, and zen oak; mixed groves of holm oak and thuja; water bodies; dense and open oak shrublands; vegetable crops or orchards; herbaceous plants; and bare soils. Texture attributes seem to provide no useful information, while the spatial attributes of shape and compactness perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscape subunits are individualized while conserving the spatial information.
Continuously dominant dense stands over large areas were grouped into a single class, as were dense fragmented stands with clear stands. Low shrubland formations and high wooded shrublands are well individualized, although the former show some confusion with enclaves. Overall, a visual evaluation shows that the classification reflects the actual spatial state of the study area at the landscape level.
Keywords: forest, oaks, remote sensing, diversity, shrublands
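The per-object k-nearest neighbours step described above assigns each segment a class from the labels of the most similar training segments. A minimal from-scratch sketch follows; the feature vectors (mean band values plus a compactness attribute) and class labels are illustrative, not taken from the study.

```python
import math
from collections import Counter

def knn_classify(features, training, k=3):
    """Classify one image object (segment) by majority vote among its
    k nearest training segments in feature space (Euclidean distance)."""
    dists = sorted((math.dist(features, f), label) for f, label in training)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical per-segment features: (mean NIR, mean red, compactness)
training = [
    ((0.45, 0.10, 0.80), "cork oak grove"),
    ((0.48, 0.12, 0.70), "cork oak grove"),
    ((0.20, 0.30, 0.90), "bare soil"),
    ((0.22, 0.33, 0.85), "bare soil"),
    ((0.40, 0.15, 0.60), "shrubland"),
]

label = knn_classify((0.46, 0.11, 0.75), training, k=3)
```

The vote over segments, rather than over individual pixels, is what lets the classifier exploit shape and compactness attributes alongside spectral means.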
Procedia PDF Downloads 132
1034 Multiple-Material Flow Control in Construction Supply Chain with External Storage Site
Authors: Fatmah Almathkour
Abstract:
Managing and controlling the construction supply chain (CSC) are very important components of effective construction project execution. The goals of managing the CSC are to reduce uncertainty and optimize the performance of a construction project by improving efficiency and reducing project costs. The heart of much supply chain activity is addressing risk, and the CSC is no different. The delivery and consumption of construction materials are highly variable due to the complexity of construction operations, rapidly changing demand for certain components, lead-time variability from suppliers, transportation-time variability, and disruptions at the job site. Current approaches to managing and controlling the CSC involve focusing on one project at a time, with a push-based material ordering system based on the initial construction schedule, and then holding a tremendous amount of inventory. A two-stage methodology was proposed that coordinates feed-forward control of advance order placement with a supplier with feedback local control, in the form of the ability to transship materials between projects, to improve efficiency and reduce costs. It focuses on the single-supplier integrated production and transshipment problem with multiple products. The methodology is used as a design tool for the CSC because it includes an external storage site not associated with any one of the projects. The idea is to add this feature to a highly constrained environment and explore its effectiveness in buffering the impact of variability and maintaining the project schedule at low cost. The methodology uses deterministic optimization models with the objective of minimizing the total cost of the CSC. To illustrate how this methodology can be used in practice and the types of information that can be gleaned, it is tested on a number of cases based on a real example of multiple construction projects in Kuwait.
Keywords: construction supply chain, inventory control supply chain, transshipment
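A toy deterministic version of the material-flow decision described above: meet each project's demand either by a direct supplier order or by transshipping from the external storage site, minimising total cost by exhaustive search. All demands, costs, and the storage stock are hypothetical, and the study's actual models are far richer; this only illustrates the cost-minimising objective.

```python
from itertools import product

demand = {"project_1": 4, "project_2": 3}            # units of one material
storage_stock = 5                                    # units at external site
cost_supplier = 10                                   # per unit, direct order
cost_transship = {"project_1": 6, "project_2": 8}    # per unit, from storage

best = None
# enumerate how many units each project draws from the external storage
for alloc in product(range(storage_stock + 1), repeat=2):
    if sum(alloc) > storage_stock:
        continue  # cannot ship more than the storage site holds
    total = 0
    for (proj, need), from_storage in zip(demand.items(), alloc):
        if from_storage > need:
            break  # infeasible: over-delivering to a project
        total += from_storage * cost_transship[proj]
        total += (need - from_storage) * cost_supplier
    else:
        if best is None or total < best[0]:
            best = (total, alloc)

min_cost, allocation = best
```

The optimum drains the cheaper transshipment routes first, which is exactly the buffering role the external storage site plays in the methodology.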
Procedia PDF Downloads 124
1033 Polymeric Composites with Synergetic Carbon and Layered Metallic Compounds for Supercapacitor Application
Authors: Anukul K. Thakur, Ram Bilash Choudhary, Mandira Majumder
Abstract:
In this technologically driven world, it is requisite to develop better, faster, and smaller electronic devices for various applications to keep pace with fast-developing modern life. It is also necessary to develop sustainable and clean sources of energy in an era when the environment is threatened by pollution and its severe consequences. The supercapacitor has gained tremendous attention in recent years because of its attractive properties: it is essentially maintenance-free, offers high specific power, high power density, and excellent pulse charge/discharge characteristics, exhibits a long cycle life, requires only a very simple charging circuit, and operates safely. Binary and ternary composites of conducting polymers with carbon and other layered transition metal dichalcogenides have shown tremendous progress in the last few decades. Compared with bulk conducting polymers, such composites have gained more attention because of their high electrical conductivity, large surface area, short ion-transport lengths, and superior electrochemical activity. These properties make them very suitable for several energy storage applications. Carbon materials, for their part, have also been studied intensively, owing to their rich specific surface area, very light weight, excellent chemical-mechanical properties, and wide operating temperature range. They have been extensively employed in the fabrication of carbon-based energy storage devices and as electrode materials in supercapacitors. Incorporating carbon materials into polymers increases the electrical conductivity of the resulting polymeric composite because of the high electrical conductivity, high surface area, and interconnectivity of the carbon.
Further, polymeric composites based on layered transition metal dichalcogenides such as molybdenum disulfide (MoS2) are also considered important because these are thin, indirect-band-gap semiconductors with a band gap of around 1.2 to 1.9 eV. Among the various 2D materials, MoS2 has received much attention because of its unique structure, consisting of a graphene-like hexagonal arrangement of Mo and S atoms stacked layer by layer to give S-Mo-S sandwiches with weak van der Waals forces between them. It shows higher intrinsic fast ionic conductivity than oxides and higher theoretical capacitance than graphite.
Keywords: supercapacitor, layered transition-metal dichalcogenide, conducting polymer, ternary, carbon
Procedia PDF Downloads 262
1032 Determination of the Phosphate Activated Glutaminase Localization in the Astrocyte Mitochondria Using Kinetic Approach
Authors: N. V. Kazmiruk, Y. R. Nartsissov
Abstract:
Phosphate-activated glutaminase (GA, E.C. 3.5.1.2) plays a key role in glutamine/glutamate homeostasis in the mammalian brain, catalyzing the hydrolytic deamidation of glutamine to glutamate and ammonium ions. GA is mainly localized in mitochondria, where it exists in a catalytically active form on the inner mitochondrial membrane (IMM) and in a soluble form, which is supposed to be dormant. At present, the exact localization of the membrane glutaminase active site remains a controversial and unresolved issue. The first hypothesis, called c-side localization, suggests that the catalytic site of GA faces the inter-membrane space, so that the products of the deamidation reaction have immediate access to cytosolic metabolism. According to the alternative m-side localization hypothesis, GA orients toward the matrix, making glutamate and ammonium directly available for tricarboxylic acid cycle metabolism in the mitochondria. In our study, we used a multi-compartment kinetic approach to simulate the metabolism of glutamate and glutamine in the astrocytic cytosol and mitochondria. We used the physiologically important ratio between the glutamine concentration inside the mitochondrial matrix [Gln_mit] and that in the cytosol [Gln_cyt] as a marker of the precise functioning of the system. Since this ratio directly depends on the flow parameters of the mitochondrial glutamine carrier (MGC), the key step was to investigate the dependence of the [Gln_mit]/[Gln_cyt] ratio on the maximal velocity of the MGC at different initial concentrations of mitochondrial glutamate. Another important task was to observe the same dependence at different inhibition constants of the soluble GA. The simulation results confirmed the experimental c-side localization hypothesis, in which the glutaminase active site faces the outer surface of the IMM.
Moreover, in the case of such localization of the enzyme, a 3-fold decrease in ammonium production was predicted.
Keywords: glutamate metabolism, glutaminase, kinetic approach, mitochondrial membrane, multi-compartment modeling
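A stripped-down two-compartment sketch of the kind of kinetic model described above: glutamine enters the matrix through a carrier with a Michaelis-Menten flux, GA consumes it at a first-order rate, and the marker ratio is read off at quasi-steady state. All rate forms and parameter values here are illustrative assumptions, not the study's model.

```python
def simulate(vmax, km=0.5, k_ga=0.2, gln_cyt=4.0, dt=0.01, t_end=50.0):
    """Forward-Euler integration of matrix glutamine [Gln_mit].

    Carrier flux: v = vmax * [Gln_cyt] / (km + [Gln_cyt])  (MGC, assumed)
    GA consumption: first-order in [Gln_mit]               (assumed)
    Returns the marker ratio [Gln_mit]/[Gln_cyt] at t_end.
    """
    gln_mit = 0.0
    t = 0.0
    while t < t_end:
        influx = vmax * gln_cyt / (km + gln_cyt)   # carrier inflow
        consumption = k_ga * gln_mit               # GA deamidation
        gln_mit += (influx - consumption) * dt
        t += dt
    return gln_mit / gln_cyt

# the marker ratio rises with the carrier's maximal velocity
ratios = [simulate(v) for v in (0.1, 0.5, 1.0)]
```

Scanning the carrier's maximal velocity this way mirrors the key dependence the abstract describes: the steady-state ratio scales directly with the MGC flow parameter.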
Procedia PDF Downloads 123
1031 Econophysical Approach on Predictability of Financial Crisis: The 2001 Crisis of Turkey and Argentina Case
Authors: Arzu K. Kamberli, Tolga Ulusoy
Abstract:
Technological developments and the resulting global communication have made the 21st century one in which large amounts of capital can be moved from one end of the world to the other at the push of a button. As a result, capital inflows have accelerated, and with them has come crisis contagion. Given irrational human behavior, financial crises spreading across the world have become a fundamental problem for countries and have increased researchers' interest in the causes of crises and the periods in which they occur. The complex nature of financial crises, and their structure that cannot be explained linearly, have accordingly been taken up by the new discipline of econophysics. As is well known, although mechanisms for predicting financial crises exist, none gives definite information. In this context, this study develops an early econophysical approach to global financial crises using the concept of the electric field from electrostatics. The aim is to define a model that can operate before a financial crisis, identify financial fragility at an earlier stage, and help public- and private-sector actors, policy makers, and economists with an econophysical approach. The 2001 Turkey crisis was assessed with data from the Turkish Central Bank covering 1992 to 2007, and for the 2001 Argentina crisis, data were taken from the IMF and the Central Bank of Argentina from 1997 to 2007. As an econophysical method, an analogy is drawn between Gauss's law, used to calculate the electric field, and the forecasting of financial crises. Taking advantage of this analogy, the concept of Φ (financial flux), based on currency movements and money mobility, has been adopted for pre-warning of a crisis.
The Φ (financial flux) values obtained with this formula, used here for the first time, were analyzed with MATLAB software; in this context, Φ was confirmed to give pre-warning of the 2001 crises in Turkey and Argentina.
Keywords: econophysics, financial crisis, Gauss's Law, physics
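A toy numeric illustration of the Gauss's-law analogy: just as the electric flux through a closed surface sums the E·A contributions crossing it, a "financial flux" can be read as the net capital flow crossing the boundary of an economy over a period. The abstract does not give the study's actual formula, so the net-flow form and the monthly figures below are invented purely for illustration.

```python
def financial_flux(inflows, outflows):
    """Net flux through the economy's 'boundary' over a period:
    positive when capital enters faster than it leaves (assumed form)."""
    return sum(inflows) - sum(outflows)

# Invented monthly capital flows (bn USD), not central-bank data:
months_in = [5.2, 4.8, 3.1, 1.0, 0.4]    # inflows
months_out = [2.0, 2.5, 3.8, 5.6, 6.1]   # outflows

phi = financial_flux(months_in, months_out)
# a sustained sign change in phi is the kind of pre-warning signal
# an approach like the one above would look for
```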
Procedia PDF Downloads 158
1030 Study on the Integration Schemes and Performance Comparisons of Different Integrated Solar Combined Cycle-Direct Steam Generation Systems
Authors: Liqiang Duan, Ma Jingkai, Lv Zhipeng, Haifan Cai
Abstract:
The integrated solar combined cycle (ISCC) system has a series of advantages, such as increased power generation, reduced cost of solar power generation, and lower pollutant and CO2 emissions. In this paper, parabolic trough collectors with direct steam generation (DSG) technology are considered as replacements for the heat load of heating surfaces in the heat recovery steam generator (HRSG) of a conventional natural gas combined cycle (NGCC) system comprising a PG9351FA gas turbine and a triple-pressure HRSG with reheat. The detailed model of the NGCC system is built in ASPEN PLUS software, and the parabolic trough collectors with DSG technology are modeled in EBSILON software. ISCC-DSG systems with the replacement of one, two, three, and four heating surfaces are studied. Results show that: (1) The ISCC-DSG systems replacing the heat loads of HPB, HPB+LPE, HPE2+HPB+HPS, and HPE1+HPE2+HPB+HPS are the best integration schemes when one, two, three, and four stages of heating surfaces, respectively, are partly replaced by parabolic trough solar collectors with DSG technology. (2) The changes in feed water flow and in the heat load of the heating surfaces in ISCC-DSG systems with multi-stage replacement are smaller than those with single-surface replacement. (3) ISCC-DSG systems with the replacement of HPB+LPE heating surfaces increase the solar power output significantly. (4) The ISCC-DSG system with the replacement of the HPB heating surface has the highest solar-thermal-to-electricity efficiency (47.45%) and solar radiation energy-to-electricity efficiency (30.37%), as well as the highest exergy efficiency of the solar field (33.61%).
Keywords: HRSG, integration scheme, parabolic trough collectors with DSG technology, solar power generation
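A back-of-envelope check of how the two efficiency figures in result (4) relate: solar-thermal-to-electricity efficiency is the extra electric output per unit of solar heat delivered, and multiplying by a collection efficiency chains the radiation-to-heat and heat-to-electricity steps. The heat input and the 0.64 collector factor below are assumed values chosen to roughly reproduce the reported percentages, not numbers from the paper.

```python
# Assumed solar heat delivered to the HRSG (MW) and the resulting
# additional electric output (MW); only their ratio matters here.
solar_heat_input = 100.0
extra_electric_output = 47.45

# solar-thermal-to-electricity efficiency: electricity per unit solar heat
eta_solar_thermal = extra_electric_output / solar_heat_input   # 0.4745

# assumed fraction of incident radiation captured as heat by the field
collector_efficiency = 0.64

# chaining the steps gives radiation-to-electricity efficiency (~30.4%)
eta_radiation = eta_solar_thermal * collector_efficiency
```

Chaining efficiencies this way is why the radiation-to-electricity figure (30.37%) is necessarily lower than the thermal-to-electricity figure (47.45%).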
Procedia PDF Downloads 262
1029 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
To develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data on 23,546 non-pregnant participants aged 20 and older from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle (smoking habits) indicators. Sample weighting was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status; the target variable was binary (diabetes or non-diabetes) based on fasting plasma glucose measurements. The following models were evaluated: logistic regression (baseline), random forest classifier, gradient boosting machine (GBM), and support vector machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The GBM model outperformed the other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors: Age: the most significant predictor, with diabetes prevalence increasing with age and peaking around the 60s for males and the 70s for females. BMI: higher BMI was strongly associated with a higher risk of diabetes. Ethnicity: Black participants had the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%). TCHOL: diabetics had lower total cholesterol levels, particularly among White participants (mean decline of 23.6 mg/dL).
Smoking: Smoking showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.
Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine
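As a hedged illustration of the modelling pipeline described above, the sketch below trains a gradient boosting classifier and scores it with AUC-ROC on synthetic stand-in data. The variable names (age, BMI, total cholesterol) mirror the abstract, but the data generation and hyperparameters are illustrative assumptions, not the authors' NHANES configuration (which also applies survey weights).

```python
# Minimal sketch of the abstract's setup: a gradient boosting classifier
# evaluated with AUC-ROC. Synthetic data stands in for the NHANES variables;
# all parameters below are illustrative, not the study's configuration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 80, n)
bmi = rng.normal(28, 6, n)
tchol = rng.normal(190, 35, n)
# Synthetic outcome: risk rises with age and BMI, as the study reports.
logit = -9.0 + 0.06 * age + 0.12 * bmi
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, bmi, tchol])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC-ROC: {auc:.2f}")
```

In the actual analysis, SHAP values would then be computed on the fitted model to rank predictors such as age and BMI.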
Procedia PDF Downloads 15
1028 Computational Insights Into Allosteric Regulation of Lyn Protein Kinase: Structural Dynamics and Impacts of Cancer-Related Mutations
Authors: Mina Rabipour, Elena Pallaske, Floyd Hassenrück, Rocio Rebollido-Rios
Abstract:
Protein tyrosine kinases, including Lyn kinase of the Src family kinases (SFK), regulate cell proliferation, survival, and differentiation. Lyn kinase has been implicated in various cancers, positioning it as a promising therapeutic target. However, the conserved ATP-binding pocket across SFKs makes developing selective inhibitors challenging. This study aims to address this limitation by exploring the potential for allosteric modulation of Lyn kinase, focusing on how its structural dynamics and specific oncogenic mutations impact its conformation and function. To achieve this, we combined homology modeling, molecular dynamics simulations, and data science techniques to conduct microsecond-length simulations. Our approach allowed a detailed investigation into the interplay between Lyn’s catalytic and regulatory domains, identifying key conformational states involved in allosteric regulation. Additionally, we evaluated the structural effects of Dasatinib, a competitive inhibitor, and ATP binding on Lyn active conformation. Notably, our simulations show that cancer-related mutations, specifically I364L/N and E290D/K, shift Lyn toward an inactive conformation, contrasting with the active state of the wild-type protein. This may suggest how these mutations contribute to aberrant signaling in cancer cells. We conducted a dynamical network analysis to assess residue-residue interactions and the impact of mutations on the Lyn intramolecular network. This revealed significant disruptions due to mutations, especially in regions distant from the ATP-binding site. These disruptions suggest potential allosteric sites as therapeutic targets, offering an alternative strategy for Lyn inhibition with higher specificity and fewer off-target effects compared to ATP-competitive inhibitors. Our findings provide insights into Lyn kinase regulation and highlight allosteric sites as avenues for selective drug development. 
Targeting these sites may modulate Lyn activity in cancer cells, reducing toxicity and improving outcomes. Furthermore, our computational strategy offers a scalable approach for analyzing other SFK members or kinases with similar properties, facilitating the discovery of selective allosteric modulators and contributing to precise cancer therapies.
Keywords: lyn tyrosine kinase, mutation analysis, conformational changes, dynamic network analysis, allosteric modulation, targeted inhibition
Procedia PDF Downloads 23
1027 Iranian Processed Cheese under Effect of Emulsifier Salts and Cooking Time in Process
Authors: M. Dezyani, R. Ezzati bbelvirdi, M. Shakerian, H. Mirzaei
Abstract:
Sodium Hexametaphosphate (SHMP) is commonly used as an Emulsifying Salt (ES) in process cheese, although rarely as the sole ES. It appears that no published studies exist on the effect of SHMP concentration on the properties of process cheese when pH is kept constant; pH is well known to affect process cheese functionality. The detailed interactions between the added phosphate, Casein (CN), and indigenous Ca phosphate are poorly understood. We studied the effect of the concentration of SHMP (0.25-2.75%) and holding time (0-20 min) on the textural and rheological properties of pasteurized process Cheddar cheese using a central composite rotatable design. All cheeses were adjusted to pH 5.6. The meltability of process cheese (as indicated by the decrease in loss tangent parameter from small amplitude oscillatory rheology, degree of flow, and melt area from the Schreiber test) decreased with an increase in the concentration of SHMP. Holding time also led to a slight reduction in meltability. Hardness of process cheese increased as the concentration of SHMP increased. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is attributable to residual colloidal Ca phosphate, was shifted to lower pH values with increasing concentration of SHMP. The insoluble Ca and total and insoluble P contents increased as the concentration of SHMP increased. The proportion of insoluble P as a percentage of total (indigenous and added) P decreased with an increase in ES concentration because some of the (added) SHMP formed soluble salts. The results of this study suggest that SHMP chelated the residual colloidal Ca phosphate content and dispersed CN; the newly formed Ca-phosphate complex remained trapped within the process cheese matrix, probably by cross-linking CN.
Increasing the concentration of SHMP helped to improve fat emulsification and CN dispersion during cooking, both of which probably helped to reinforce the structure of process cheese.
Keywords: Iranian processed cheese, emulsifying salt, rheology, texture
Procedia PDF Downloads 434
1026 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots
Authors: Mrinalini Ranjan, Sudheesh Chethil
Abstract:
Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA considers fluctuation from local linear trends. Scale invariance of these signals is well captured in the multifractal characterisation using DFA. Analysis of long-range correlations is vital for understanding the dynamics of EEG signals. Correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in the epileptic EEG signals which quantify short and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure free) datasets of different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure compared to the pre-seizure period and a subsequent increase during the post-seizure period, while the long-term scaling exponent shows an increase during seizure activity. Our calculation of the long-term scaling exponent yields a value between 0.5 and 1, thus pointing to power law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour. We find an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution in different channels can help in better identification of the areas in the brain most affected during seizure activity. The nature of epileptic seizures varies from patient to patient.
To illustrate this, we report an increase in the long-term scaling exponent for some patients which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time index of recurrence of a dynamical state. We perform Recurrence Quantification Analysis (RQA) and calculate RQA parameters like diagonal length, entropy, recurrence, determinism, etc. for ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that RQA parameters are higher during the seizure period as compared to post-seizure values, whereas for some patients post-seizure values exceeded those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in better understanding the characterisation of epileptic EEG signals from a nonlinear analysis.
Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots
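The scaling-exponent computation described above can be sketched as follows. This is a minimal DFA-1 implementation run on synthetic noise, not the authors' EEG pipeline; the window sizes and signal length are illustrative assumptions.

```python
# Minimal sketch of detrended fluctuation analysis (DFA-1). For uncorrelated
# noise the exponent is ~0.5; values between 0.5 and 1 indicate the persistent
# long-range temporal correlations (LRTC) discussed in the abstract.
import numpy as np

def dfa_exponent(x, scales):
    """Return the DFA-1 scaling exponent of signal x."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:                   # detrend each window linearly
            coef = np.polyfit(t, seg, 1)
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # slope of log F(s) vs log s is the scaling exponent alpha
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(1)
white = rng.standard_normal(10_000)
scales = np.array([16, 32, 64, 128, 256, 512])
alpha = dfa_exponent(white, scales)
print(f"alpha ~ {alpha:.2f}")
```

Fitting the slope over small scales and large scales separately would give the two (short-range and long-range) exponents the study reports.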
Procedia PDF Downloads 180
1025 Mapping Forest Biodiversity Using Remote Sensing and Field Data in the National Park of Tlemcen (Algeria)
Authors: Bencherif Kada
Abstract:
In forest management practice, landscape and Mediterranean forest are never posed as linked objects. But sustainable forestry requires the valorization of the forest landscape, and this aim involves assessing the spatial distribution of biodiversity by mapping forest landscaped units and subunits and by monitoring environmental trends. This contribution aims to highlight, through object-oriented classifications, the landscaped biodiversity of the National Park of Tlemcen (Algeria). The methodology used is based on ground data and on the basic processing units of object-oriented classification, namely segments, so-called image-objects, representing relatively homogeneous units on the ground. The classification of Landsat Enhanced Thematic Mapper plus (ETM+) imagery is performed on image objects, not on pixels. Advantages of object-oriented classification are the ability to make full use of meaningful statistics and texture calculations, uncorrelated shape information (e.g., length-to-width ratio, direction and area of an object, etc.) and topological features (neighbor, super-object, etc.), and the close relation between real-world objects and image objects. The results show that per-object classification using the k-nearest neighbors method is more efficient than per-pixel classification. It simplifies the content of the image while preserving spectrally and spatially homogeneous types of land cover such as Aleppo pine stands, cork oak groves, mixed groves of cork oak, holm oak and zen oak, mixed groves of holm oak and thuja, water bodies, dense and open shrub-lands of oaks, vegetable crops or orchards, herbaceous plants and bare soils. Texture attributes seem to provide no useful information, while spatial attributes such as shape and compactness perform well for all the dominant features, such as pure stands of Aleppo pine and/or cork oak and bare soils. Landscaped sub-units are individualized while conserving the spatial information.
Dense stands dominating continuously over a large area were grouped into a single class, as were fragmented dense stands mixed with clear stands. Low shrubland formations and high wooded shrublands are well individualized, although the former show some confusion with enclaves. Overall, a visual evaluation shows that the classification reflects the actual spatial state of the study area at the landscape level.
Keywords: forest, oaks, remote sensing, biodiversity, shrublands
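As a hedged sketch of the per-object classification step, the example below applies a k-nearest neighbors classifier to image objects described by feature vectors. The two feature dimensions (a mean band value and a length-to-width ratio) and the class labels are illustrative assumptions, not the study's actual attribute set.

```python
# Sketch of per-object k-NN classification: each segment (image object) is
# described by feature statistics and labelled from ground samples.
# Features, clusters, and class names below are illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
# Two synthetic land-cover classes in a (mean NIR, length/width ratio) space.
pine = np.column_stack([rng.normal(0.6, 0.05, 50), rng.normal(1.2, 0.2, 50)])
bare = np.column_stack([rng.normal(0.2, 0.05, 50), rng.normal(3.0, 0.5, 50)])
X = np.vstack([pine, bare])
y = np.array(["aleppo_pine"] * 50 + ["bare_soil"] * 50)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
pred = clf.predict([[0.58, 1.3], [0.22, 2.8]])
print(pred)  # one object near each cluster
```

In practice the feature vectors would come from the segmentation software's per-object statistics rather than being hand-built as here.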
Procedia PDF Downloads 38
1024 Endoscopic Stenting of the Main Pancreatic Duct in Patients With Pancreatic Fluid Collections After Pancreas Transplantation
Authors: Y. Teterin, S. Suleymanova, I. Dmitriev, P. Yartcev
Abstract:
Introduction: One of the most common complications after pancreas transplantation is pancreatic fluid collections (PFCs), which are often complicated not only by infection and subsequent dysfunction of the pancreatoduodenal graft (PDG), but also by a rather high mortality rate among recipients. Drainage is not always effective and often requires repeated open surgical interventions, which worsens the outcome of the surgery. Percutaneous drainage of PFCs combined with endoscopic stenting of the main pancreatic duct of the pancreatoduodenal graft (MPDPDG) showed high efficiency in the treatment of PFCs. Aims & Methods: From 01.01.2012 to 31.12.2021, 64 transplantations of PDG were performed at the Sklifosovsky Research Institute for Emergency Medicine. In 11 cases (17.2%), the early postoperative period was complicated by the formation of PFCs. Of these, 7 patients underwent percutaneous drainage of pancreonecrosis with high efficiency and did not require additional methods of treatment. In the remaining 4 patients, drainage was ineffective, which was an indication for endoscopic stenting of the MPDPDG. These patients made up the study group. Among them were 3 men and 1 woman. The mean age of the patients was 36.4 years. PFCs in these patients formed on days 1, 12, 18, and 47 after PDG transplantation. We used a gastroscope to stent the MPDPDG due to the anatomical features of the location of the duodenoduodenal anastomosis after PDG transplantation. Selective catheterization of the MPDPDG was performed through the endoscope channel using a catheter and a guidewire, followed by its contrasting with a water-soluble contrast agent. The localization of the defect in the PDG duct system was determined from the extravasation of the contrast. After that, a plastic pancreatic stent with a diameter of 7 Fr and a length of 7 cm was installed along the guidewire.
The stent was installed in such a way that its proximal edge completely covered the defect zone, while the distal edge lay in the intestinal lumen. Results: In all patients, PDG pancreaticography revealed extravasation of contrast in the area of the isthmus and body of the pancreas, which required stenting of the MPDPDG. In 1 (25%) case, the patient had a dislocation of the stent into the intestinal lumen (grade III according to Clavien-Dindo, 2009). This patient underwent repeated endoscopic stenting of the MPDPDG. On average 23 days after endoscopic stenting of the MPDPDG, the drainage tubes were removed, and after approximately 40 days all patients were discharged in a satisfactory condition with follow-up consultations with an endocrinologist and a surgeon. Pancreatic stents were removed after 6 months ± 7 days. Conclusion: Endoscopic stenting of the main pancreatic duct of the donor pancreas is by far the most highly effective and minimally invasive method in the treatment of PFCs after transplantation of the pancreatoduodenal complex.
Keywords: pancreas transplantation, endoscopy surgery, diabetes, stenting, main pancreatic duct
Procedia PDF Downloads 90
1023 Airborne Pollutants and Lung Surfactant: Biophysical Impacts of Surface Oxidation Reactions
Authors: Sahana Selladurai, Christine DeWolf
Abstract:
Lung surfactant comprises a lipid-protein film that coats the alveolar surface and serves to prevent alveolar collapse upon repeated breathing cycles. Exposure of lung surfactant to high concentrations of airborne pollutants, for example tropospheric ozone in smog, can chemically modify the lipid and protein components. These chemical changes can impact the film functionality by decreasing the film’s collapse pressure (minimum surface tension attainable), altering its mechanical and flow properties, and modifying lipid reservoir formation essential for re-spreading of the film during the inhalation process. In this study, we use Langmuir monolayers spread at the air-water interface as model membranes where the compression and expansion of the film mimics the breathing cycle. The impact of ozone exposure on model lung surfactant films is measured using a Langmuir film balance, Brewster angle microscopy and a pendant drop tensiometer as a function of film and sub-phase composition. The oxidized films are analyzed using mass spectrometry, where lipid and protein oxidation products are observed. Oxidation is shown to reduce surface activity, alter line tension (and film morphology) and in some cases visibly reduce the viscoelastic properties of the film when compared to controls. These reductions in functionality of the films are highly dependent on film and sub-phase composition, where for example, the effect of oxidation is more pronounced when using a physiologically relevant buffer as opposed to water as the sub-phase. These findings can lead to a better understanding of the impact of continuous exposure to high levels of ozone on the mechanical process of breathing, as well as of the roles of certain lung surfactant components in this process.
Keywords: lung surfactant, oxidation, ozone, viscoelasticity
Procedia PDF Downloads 313
1022 Development of Integrated Solid Waste Management Plan for Industrial Estates of Pakistan
Authors: Mehak Masood
Abstract:
This paper aims to design an integrated solid waste management plan for industrial estates, taking Sundar Industrial Estate as a case model. The issue of solid waste management is on the rise in Pakistan, especially in the industrial sector. In this regard, the concept of development and establishment of industrial estates is gaining popularity nowadays. Without a proper solid waste management plan, it is very difficult to manage the day-to-day affairs of industrial estates. An industrial estate contains clusters of different types of industrial units, so it is necessary to identify the different solid waste streams from each industrial cluster within the estate. Primary and secondary data collection, waste assessment, waste segregation and weighing, and field surveys were essential elements of the study. Wastes from each industrial process were identified and quantified. Currently, 130 industries are in production, but after full colonization this number would reach 385. Elaborated process flow diagrams were made to characterize the recyclable and non-recyclable waste. From the study it was calculated that about 12354.1 kg/capita/day of solid waste is being generated in Sundar Industrial Estate. After the full colonization of the industrial estate, the estimated quantity will be 4756328.5 kg/capita/day. Furthermore, the solid waste generated from each industrial sector was estimated. Suggestions for collection and transportation are given, and environment-friendly solid waste management practices are suggested. If an effective integrated waste management system is developed and implemented, it will conserve natural resources, create jobs, reduce poverty, protect the environment, save collection, transportation and disposal costs, and extend the life of disposal sites.
A major outcome of this study is an integrated solid waste management plan for the Sundar Industrial Estate which requires immediate implementation.
Keywords: integrated solid waste management plan, industrial estates, Sundar Industrial Estate, Pakistan
Procedia PDF Downloads 493
1021 Uranium Migration Process: A Multi-Technique Investigation Strategy for a Better Understanding of the Role of Colloids
Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes
Abstract:
The knowledge of uranium migration processes within underground environments is a major issue in the environmental risk assessment associated with nuclear activities. This process is identified as strongly controlled by adsorption mechanisms, thus leading to strongly delayed migration paths. Colloidal ligands are likely to significantly increase the mobility of uranium in natural environments. The ability of colloids to mobilize and transport uranium depends on their origin, their nature, their structure, their stability and their reactivity with uranium. Thus, the colloidal mobilization and transport properties are often described as site-specific. In this work, the colloidal phases of two leachates obtained from two different horizons of the same podzolic soil were characterized with a speciation approach. For this purpose, a multi-technique strategy was used, based on Field-Flow Fractionation coupled to Ultraviolet, Multi-Angle Light Scattering and Inductively Coupled Plasma Mass Spectrometry (AF4-UV-MALS-ICPMS), Transmission Electron Microscopy (TEM), Electrospray Ionization Orbitrap Mass Spectrometry (ESI-Orbitrap), and Time-Resolved Laser Fluorescence Spectroscopy (TRLFS-EEM). Thus, the elemental composition, size distribution, microscopic structure, colloidal stability and possible organic and/or inorganic content of colloids were determined, as well as their association with uranium. The leachates exhibit differences in their physical and chemical characteristics, mainly in the nature of the organic matter constituents. The multi-technique investigation strategy used provides original data about colloidal phase structure and composition, offering a new vision of the way uranium can be mobilized and transported in the considered soil. This information is a significant contribution, opening the way to understanding and predicting colloidal transport.
Keywords: colloids, migration, multi-technique, speciation, transport, uranium
Procedia PDF Downloads 145
1020 Redesigning the Plant Distribution of an Industrial Laundry in Arequipa
Authors: Ana Belon Hercilla
Abstract:
The study was developed in the “Reactivos Jeans” company in the city of Arequipa, whose main business is the laundering of garments at an industrial level. In 2012 the company initiated actions to provide a dry cleaning service for alpaca fiber garments, recognizing that this item is in a growth phase in Peru. Additionally, the company took the initiative to use a new greenwashing technology which had not yet been developed in the country. To accomplish this, a redesign of both the process and the plant layout was required. For redesigning the plant, the methodology used was Systematic Layout Planning, allowing the study to be divided into four stages. The first stage is information gathering and evaluation of the initial situation of the company, for which a description was made of the areas, facilities and initial equipment, the distribution of the plant, the production process and the flows of major operations. The second stage is the development of engineering techniques that allow logging and analysis procedures, such as the Flow Diagram, Route Diagram, DOP (process flowchart) and DAP (analysis diagram). Then the planning of the general distribution is carried out. At this stage, proximity factors of the areas are established, and the Relational Activity Table (TRA) and the Relational Activity Diagram (DRA) are developed. In order to obtain the General Grouping Diagram (DGC), further information is complemented by a time study, and the Guerchet method is used to calculate the space requirements for each area. Finally, the plant layout redesign is presented and the improvement is implemented, making it possible to obtain a model much more efficient than the initial design.
The results indicate that the implementation of the new machinery, the adaptation of the plant facilities and the relocation of equipment reduced the production cycle time by 75.67%, routes by 68.88%, and the number of activities during the process by 40%, while waits and storage were eliminated entirely.
Keywords: redesign, time optimization, industrial laundry, greenwashing
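The Guerchet space calculation mentioned above can be sketched as follows; the machine dimensions and the K coefficient are illustrative assumptions, not the company's figures.

```python
# Sketch of the Guerchet method for sizing each plant area:
# total surface = n * (Ss + Sg + Se), where Ss is the static (footprint)
# area, Sg = Ss * N the gravitation area for N usable sides, and
# Se = K * (Ss + Sg) the evolution (circulation) area.
def guerchet_area(n, length, width, sides, k):
    ss = length * width          # static area per machine (m^2)
    sg = ss * sides              # gravitation area (operator/material sides)
    se = k * (ss + sg)           # evolution area (circulation allowance)
    return n * (ss + sg + se)

# e.g. 4 hypothetical washing machines, 2.0 m x 1.5 m, used on 2 sides, K = 0.5
area = guerchet_area(n=4, length=2.0, width=1.5, sides=2, k=0.5)
print(f"required area: {area:.1f} m^2")  # 4 * (3 + 6 + 4.5) = 54.0 m^2
```

Summing this figure over every area gives the space requirements fed into the General Grouping Diagram (DGC).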
Procedia PDF Downloads 396
1019 The Status of the Actio Popularis under International Environmental Law in Cases of Damage to Global Commons
Authors: Aimite Jorge, Leenekela Usebiu
Abstract:
In recent years the international community has seen a rise of what can be termed ‘actio popularis’; that is to say, lawsuits brought by third parties in the interest of the public or the world community as a whole, such as in cases of genocide and terrorism prosecutions under international law. It is equally clear that in the current globalized world the effect of multinational activities on the environment is often felt beyond the borders of the territories where they operate. Equally true is the fact that the correspondence of citizens’ self-determination with national government is increasingly upset by the growing willingness of states to share some ‘sovereign powers’ in order to address new economic, environmental and security interdependencies. The ‘unbundling’ of functional governance from fixed territories sees citizens continuously give up their formal approval of key decisions in exchange for a more remote, indirect say in supra-national or international decision-making bodies. The efforts to address a growing transnational flow of ecological harm are at the forefront of such indirect transformations, as evidenced by a proliferation of multilateral environmental agreements (MEAs) over the past three decades. However, unlike the defence of the global commons in cases of terrorism and genocide, there is still to be a clear application of actio popularis in the case of the environment, despite acknowledgement that the effect of the activities of several multinationals on the environment is as destructive to the global commons as genocide or terrorism. Thus, this paper, looking at specific cases of harmful degradation of the environment by certain multinationals transcending national boundaries, argues that it is high time for a serious consideration of the application of the actio popularis to environmental concerns.
Although it is acknowledged that in international environmental law the challenge of reaching a “critical mass” of recognition and support for an ‘actio popularis’ for environmental damage is particularly demanding, it is worth the attempt.
Keywords: actio popularis in environment law, global commons, transnational environmental damage, law and environment
Procedia PDF Downloads 574
1018 Determination of Unsaturated Soil Permeability Based on Geometric Factor Development of Constant Discharge Model
Authors: A. Rifa’i, Y. Takeshita, M. Komatsu
Abstract:
After the Yogyakarta earthquake in 2006, the main problem in the first yard of Prambanan Temple has been ponding that occurs after rainfall. To solve this problem, soil characterization needs to be carried out, especially determination of the permeability coefficient (k) in both saturated and unsaturated conditions. A more accurate and efficient field testing procedure is required to obtain permeability data that represent the field condition. One such field permeability test is the constant discharge procedure for determining the permeability coefficient. Necessary adjustments to the constant discharge procedure, especially the value of the geometric factor (F), need to be determined to improve the corresponding value of the permeability coefficient. The value of k is correlated with the volumetric water content (θ) from unsaturated up to saturated conditions. The principle of the constant discharge model is to provide a constant flow into a permeameter tube that discharges into the ground until the water level in the tube becomes constant. The constant water level in the tube is highly dependent on the tube dimensions. Every tube dimension has a shape factor, called the geometric factor, that affects the result of the test; its value is defined by the shape and radius of the tube. This research modified the geometric factor parameters by using the empty material tube method so that the geometric factor changes. The saturation level is monitored by using a soil moisture sensor. The field test results were compared with the results of laboratory tests for validation. Field and laboratory test results of the empty material tube method have an average difference of 3.33 × 10⁻⁴ cm/s. The test results showed that the modified geometric factor provides more accurate data.
The improved constant discharge procedure provides more relevant results.
Keywords: constant discharge, geometric factor, permeability coefficient, unsaturated soils
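A hedged sketch of how such a constant-head test is commonly reduced to a permeability coefficient: with a steady discharge Q and constant head H, k = Q / (F·H). The Hvorslev-type shape factor and all numbers below are illustrative assumptions, not the paper's modified geometric factor or measured values.

```python
# Sketch of constant-discharge (constant-head) test reduction: once the head
# H in the tube is steady, k = Q / (F * H). The shape factor used here is a
# standard Hvorslev-type expression for an intake of depth L and diameter d;
# the study's modified geometric factor would replace it.
import math

def shape_factor_hvorslev(d, depth):
    """Approximate geometric factor for an intake of given diameter/depth."""
    return 2 * math.pi * depth / math.log(2 * depth / d)

def permeability(q_cm3_s, head_cm, f_cm):
    return q_cm3_s / (f_cm * head_cm)   # cm/s

F = shape_factor_hvorslev(d=5.0, depth=50.0)   # 5 cm tube, 50 cm intake depth
k = permeability(q_cm3_s=2.0, head_cm=40.0, f_cm=F)
print(f"F = {F:.1f} cm, k = {k:.2e} cm/s")
```

Pairing each such k with the soil-moisture-sensor reading θ gives the k–θ correlation from unsaturated to saturated conditions described above.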
Procedia PDF Downloads 298
1017 Television and Virtual Public Sphere: A Study on Malayali Tribes in Salem District, Tamil Nadu
Authors: P. Viduthalai, A. K. Divakar, V. Natarajan
Abstract:
Media is one of the powerful tools that manipulate the world in numerous aspects, especially in the form of a communication process. For instance, the concept of the public sphere, which was earlier represented by landlords and elites, has now transformed into a virtual public sphere, which is also represented by marginalized people. Unfortunately, this acquisition is still paradoxical. Though media proliferation and its effects are humongous, they have not been the same throughout the world: inequality in access to media has created a technological divide among people. Finally, globalization and the government’s approach towards using media for development communication have significantly changed the way the media reaches every nook and corner. Monarchy, oligarchy, republic and democracy together form the basis of most governments of the world, of which democracy is the one with the highest involvement and participation of the people. Ideally, the participation of the people is what keeps a democracy running. A healthy democracy is possible only when people are able to access information that makes citizens responsible and serves to check the functioning of their elected representatives. On one side, the media consumption of people plays a crucial role in the formation of the public sphere; on the other, big media conglomerates are a serious threat to community participation, which is a goal that the media should strive for in a country like India. How different people consume these different media varies greatly across the length and breadth of the country. Another aspect of this media consumption is that it is not passive. People’s usage and consumption of media are related to the gratification that they derive from the particular medium. This aspect varies from person to person and from society to society according to both internal and external factors.
This article sets out from the underlying belief that Malayali Tribes have adopted television as a part of daily life, such that a day never passes without it, especially after the introduction of the Free Television Scheme by the previous state government. Though they live in hilly and socially isolated places, they too have started accessing media, dictated by their interest, to understand the people of the plains and their culture. Many of these interests appear to have a social and psychological origin. The present research attempts to study how gratification of these needs leads Malayali Tribes to form such a virtual public sphere where they can communicate with people of the plains. Data were collected through the survey method from 300 respondents on “Exposure towards Television and their perception”. Conventional anthropological methods like unstructured interviews were also used to supplement the data collection efforts in the three taluks, namely Yercaud, Pethanayankkanpalayam and Panamaraththuppatty, in Salem district of Tamil Nadu. The results highlight the role of television in gratifying the needs of the Malayali Tribes.
Keywords: democracy, gratification, Malayali Tribes and television, virtual public sphere
Procedia PDF Downloads 257
1016 3D-Printing of Waveguide Terminations: Effect of Material Shape and Structuring on Their Characteristics
Authors: Lana Damaj, Vincent Laur, Azar Maalouf, Alexis Chevalier
Abstract:
A matched termination is an important part of passive waveguide components. It is typically used at the end of a waveguide transmission line to prevent reflections and improve signal quality. Waveguide terminations (loads) are commonly used in microwave and RF applications. In traditional microwave architectures, a waveguide termination usually consists of a standard rectangular waveguide made of a lossy resistive material and ended by a shorting metallic plate. These types of terminations are used to dissipate the energy as heat. However, they may increase the size and the weight of the overall system. An alternative solution consists of developing terminations based on 3D-printing of materials. Designing such terminations is very challenging since they should meet the requirements imposed by the system. These requirements include many parameters, such as the absorption and the power handling capability, in addition to the cost, the size and the weight, which have to be minimized. 3D-printing is a shaping process that enables the production of complex geometries and allows finding the best compromise between requirements. In this paper, a comparison study has been made between different existing and new shapes of waveguide terminations. Indeed, 3D printing of absorbers makes it possible to study not only standard shapes (wedge, pyramid, tongue) but also more complex topologies such as exponential ones. These shapes have been designed and simulated using CST MWS®. The loads have been printed using carbon-filled polylactic acid (conductive PLA) from ProtoPasta. Since the terminations have been characterized in the X-band (from 8 GHz to 12 GHz), the rectangular waveguide standard WR-90 has been selected. The classical wedge shape has been used as a reference. First, all loads have been simulated with the same length and two parameters have been compared: the absorption level (level of |S11|) and the dissipated power density.
This study shows that the concave exponential pyramidal shape has the best absorption level, while the convex exponential pyramidal shape has the best dissipated power density. These two loads have been printed in order to measure their properties, and good agreement between the simulated and measured reflection coefficients has been obtained. Furthermore, a material-structuring study based on a hexagonal honeycomb structure has been carried out in order to vary the effective properties. In the final paper, the detailed methodology and the simulated and measured results will be presented in order to show how 3D printing allows the mass, weight, absorption level, and power behaviour to be controlled.
Keywords: additive manufacturing, electromagnetic composite materials, microwave measurements, passive components, power handling capacity (PHC), 3D-printing
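The contrast between the taper shapes can be illustrated with a small sketch of normalized height profiles; the profile formulas and the growth-rate parameter `k` below are illustrative assumptions, not the CST MWS® geometries used in the study.

```python
import math

def taper_profile(shape, n=5, k=3.0):
    """Normalized absorber height along the taper axis, from the tip (0)
    to the base (1). 'k' sets how sharply the exponential shapes grow;
    its value here is assumed for illustration."""
    xs = [i / (n - 1) for i in range(n)]           # normalized axial position
    if shape == "wedge":                           # linear taper (reference)
        return xs
    if shape == "concave_exponential":             # thin near the tip, fills late
        return [(math.exp(k * x) - 1) / (math.exp(k) - 1) for x in xs]
    if shape == "convex_exponential":              # fills quickly near the tip
        return [1 - (math.exp(k * (1 - x)) - 1) / (math.exp(k) - 1) for x in xs]
    raise ValueError(f"unknown shape: {shape}")
```

Qualitatively, a concave profile keeps the load thin near the tip, giving a more gradual impedance transition, while a convex profile concentrates lossy material early along the taper; this mirrors, in simplified form, the absorption versus dissipated-power trade-off reported above.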
Procedia PDF Downloads 25
1015 Electrochemical Biosensor for the Detection of Botrytis spp. in Temperate Legume Crops
Authors: Marzia Bilkiss, Muhammad J. A. Shiddiky, Mostafa K. Masud, Prabhakaran Sambasivam, Ido Bar, Jeremy Brownlie, Rebecca Ford
Abstract:
Early diagnosis and quantitation of the causal pathogen species would be a major advance for Integrated Disease Management (IDM), enabling accurate and timely disease control before losses occur. This could significantly reduce costs to the growers and reduce any flow-on impacts to the environment from excessive chemical spraying. The necrotrophic fungal disease botrytis grey mould, caused by Botrytis cinerea and Botrytis fabae, significantly reduces temperate legume yield and grain quality under favourable environmental conditions in Australia and worldwide. Several immunogenic and molecular probe-type protocols have been developed for their diagnosis, but these have varying levels of species specificity, sensitivity, and consequent usefulness within the paddock. To substantially improve speed, accuracy, and sensitivity, advanced nanoparticle-based biosensor approaches have been developed. For this, two sets of primers were designed, one each for Botrytis cinerea and Botrytis fabae, which showed species specificity with an initial sensitivity of two genomic copies/µl in pure fungal backgrounds using multiplexed quantitative PCR. During further validation, quantitative PCR detected 100 spores on artificially infected legume leaves. Simultaneously, an electrocatalytic assay was developed for both target fungal DNAs using functionalised magnetic nanoparticles. This was extremely sensitive, able to detect a single spore within a raw total plant nucleic acid extract background. We believe that the translation of this technology to the field will enable quantitative assessment of pathogen load for future accurate decision support of informed botrytis grey mould management.
Keywords: biosensor, botrytis grey mould, sensitive, species specific
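Converting a quantitative PCR readout into a copy number is typically done through a linear standard curve; the sketch below shows the usual relation, with a slope and intercept that are generic textbook values, not the calibration of this assay.

```python
def copies_from_cq(cq, slope=-3.32, intercept=37.0):
    """Estimate target copy number from a qPCR quantification cycle (Cq)
    via the standard curve Cq = slope * log10(copies) + intercept.
    A slope of -3.32 corresponds to ~100% amplification efficiency;
    both parameter values here are illustrative, not fitted to this assay."""
    return 10 ** ((cq - intercept) / slope)
```

On such a curve a ten-fold dilution shifts Cq by about 3.32 cycles, which is how a detection limit of a few genomic copies/µl would be read off from the latest quantifiable cycle.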
Procedia PDF Downloads 177
1014 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model
Authors: Yew Mun Yip, Dawei Zhang
Abstract:
Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, this machinery malfunctions, leading to misfolding diseases. Although in vitro experiments can show that mutations of the amino acid sequence lead to incorrectly folded protein structures, they cannot decipher the folding process itself. Therefore, molecular dynamics (MD) simulations are employed to simulate the folding process, so that an improved understanding of folding will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding of peptides. Secondary structures are formed via hydrogen bonds between the backbone atoms (C, O, N, H). It is important that the hydrogen bond energy computed during an MD simulation is accurate in order to direct the folding process towards the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom attracts greater electron density from the less electronegative atom towards itself. This is known as the polarization effect. Since the polarization effect changes the electron density of the two atoms in close proximity, their atomic charges should also vary with the strength of the effect. However, the fixed atomic charge scheme in force fields does not account for the polarization effect. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulations by updating the atomic charges of the backbone hydrogen-bond atoms according to equations, derived from quantum-mechanical calculations, that relate the amount of charge transferred to an atom to the length of the hydrogen bond.
Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time step, yet it maintains the dynamic update of atomic charges, thereby reducing the computational cost and time while still accounting for the polarization effect dynamically. The PSBC model is applied to two different β-peptides: the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose folded structure has been studied in vitro by NMR, and the trpzip peptides, double-stranded β-sheets for which a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.
Keywords: hydrogen bond, polarization effect, protein folding, PSBC
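The charge-update idea behind the model can be sketched in a few lines; the linear distance dependence and all parameter values below (`q_max`, `r0`, `r1`) are illustrative placeholders, not the quantum-mechanically fitted PSBC equations.

```python
def polarized_charge(q_fixed, r_hb, q_max=0.06, r0=1.8, r1=2.5):
    """Adjust a backbone atom's fixed partial charge by a hydrogen-bond-
    length-dependent transfer term, in the spirit of the PSBC model.

    q_fixed : fixed force-field partial charge (e)
    r_hb    : current H...O hydrogen-bond length (angstrom)
    """
    if r_hb <= r0:            # very short bond: maximal polarization
        dq = q_max
    elif r_hb >= r1:          # bond effectively broken: no charge transfer
        dq = 0.0
    else:                     # linear interpolation in between (assumed form)
        dq = q_max * (r1 - r_hb) / (r1 - r0)
    return q_fixed + dq
```

In an MD loop, such a function would be evaluated for every backbone hydrogen-bond pair at each time step to refresh the charges before the force evaluation, which is what keeps the update cheap compared to on-the-fly quantum-mechanical calculations.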
Procedia PDF Downloads 271
1013 Managing Physiological and Nutritional Needs of Rugby Players in Kenya
Authors: Masita Mokeira, Kimani Rita, Obonyo Brian, Kwenda Kennedy, Mugambi Purity, Kirui Joan, Chomba Eric, Orwa Daniel, Waiganjo Peter
Abstract:
Rugby is a highly intense and physical game requiring speed and strength, so the need for physical fitness cannot be over-emphasized. Sports training is no longer just about lifting weights to build muscle; most professional teams are investing much more in the sport in terms of time, equipment, and other resources. To play competitively, Kenyan players may therefore need to complement their 'home-grown' and sometimes ad hoc training and nutrition regimes with carefully measured strength and conditioning, diet, nutrition, and supplementation. Nokia Research Center and the University of Nairobi conducted an exploratory study on the needs and behaviours surrounding sports in Africa. Rugby, a sport that is gaining ground in Kenya, was selected as the main focus. The end goal of the research was to identify areas where mobile technology could be used to address gaps, challenges, and/or unmet needs. Themes such as the information gap, social culture, growth and development, revenue flow, and technology adoption, among others, emerged about the sport. From the growth and development theme, it was clear that as rugby continues to grow in the country, teams, coaches, and players are employing interesting techniques in both training and playing. Though some of these techniques are indeed scientific, those employing them are sometimes not fully aware of their scientific basis. A further case study on sports science in rugby in Kenya, focusing on physical fitness and nutrition, revealed interesting findings. This paper discusses findings on the emerging adoption of techniques for managing the physiological and nutritional needs of rugby players across different levels of rugby in Kenya, namely the high school, club, and national levels.
Keywords: rugby, nutrition, physiological needs, sports science
Procedia PDF Downloads 394
1012 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction
Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong
Abstract:
Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches for short-term latency prediction. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of the short-term latency of a freeway segment 17 km in length covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment at different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of the median latency is much more accurate and meaningful than the prediction of the average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance scores in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance.
It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially when the latency prediction of the downstream sub-segments is trained to anticipate latencies several minutes ahead. The duration of this anticipation time is an increasing function of the travelling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
Keywords: data refinement, machine learning, mutual information, short-term latency prediction
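The robustness of the median over the average, which underlies conclusion (1), can be seen with a toy example; the latency values below are made up for illustration and are not the Taiwan Freeway measurements.

```python
import statistics

def latency_summary(samples):
    """Mean and median of a window of latency samples (minutes)."""
    return {"mean": statistics.mean(samples),
            "median": statistics.median(samples)}

# A 5-minute window: typical ~6-minute traversals plus one incident outlier.
window = [5.8, 6.0, 6.1, 5.9, 6.2, 24.0]
summary = latency_summary(window)
# The single outlier drags the mean up to 9.0 min,
# while the median stays near the typical 6.05 min.
```

A predictor trained on the mean is pulled toward such outliers, whereas one trained on the median tracks the typical traveller's latency, which is the behaviour the XGBoost comparison above quantifies.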
Procedia PDF Downloads 172