Search results for: savings rate
440 Mental Well-Being and Quality of Life: A Comparative Study of Male Leather Tannery and Non-Tannery Workers of Kanpur City, India
Authors: Gyan Kashyap, Shri Kant Singh
Abstract:
Improved mental health can be articulated as good physical health and quality of life. Mental health plays an important role in anyone's survival. Today, people live with stress arising from personal matters, health problems, unemployment, the work environment, the living environment, substance use, lifestyle, and many other important reasons. Many studies confirm that the proportion of people with mental health problems is increasing significantly in India. This study focuses on the mental well-being of male leather tannery workers in Kanpur city, India. Both the workplace environment and the living environment are important health risk factors for leather tannery workers. Tannery workers are highly susceptible to chemical and physical hazards, because they are exposed to many hazardous materials and processes during tanning work in a very hazardous work environment. The aim of this study is to determine the level of mental health disorder and quality of life among male leather tannery and non-tannery workers in Kanpur city, India. The study utilized primary data from a cross-sectional household survey conducted from January to June 2015 on tannery and non-tannery workers, as part of a PhD program, in the Jajmau area of Kanpur city, India. A sample of 286 tannery and 295 non-tannery workers was collected from the study area. Information was collected from workers aged 15-70 who had been working for at least one year at the time of the survey. The study used the General Health Questionnaire (GHQ-12) and a work-related stress scale to test the mental well-being of male tannery and non-tannery workers. Polychoric factor analysis was applied to the GHQ-12 and the work-related stress scale to determine the best thresholds and scoring.
Quality of life was measured on a Likert scale with questions such as 'How would you rate your overall quality of life?', together with information on earnings, education, family size, living conditions, household assets, media exposure, health expenditure, treatment-seeking behavior, and food habits. Results from the study revealed that around one third of tannery workers had severe mental health problems, more than non-tannery workers. Mental health problems showed a statistically significant association with wealth quintile: 56 percent of tannery workers in the medium wealth quintile had severe mental health problems, and 42 percent of tannery workers in the low wealth quintile had moderate mental health problems. The work-related stress scale also gave statistically significant results for tannery workers. A large proportion of tannery and non-tannery workers reported that they are unable to meet their basic needs from their earnings and are living in very poor conditions. Notably, 58% of tannery workers involved in beam house work had severe mental health problems. This study found a statistically significant association between tannery work and mental health problems among tannery workers.
Keywords: GHQ-12, mental well-being, factor analysis, quality of life, tannery workers
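As a rough illustration of how GHQ-12 responses become a mental-health score, the sketch below applies the standard bimodal (0-0-1-1) GHQ scoring. The severity cut-offs shown are purely illustrative assumptions; the study derived its own thresholds via polychoric factor analysis.

```python
# Standard bimodal (0-0-1-1) scoring of GHQ-12 responses: each of the 12
# items is answered on a 0-3 scale, and the two "symptomatic" responses
# (2 or 3) contribute one point each, giving a total of 0-12.

def ghq12_bimodal_score(responses):
    """Score 12 GHQ items, each answered 0-3, with GHQ (0-0-1-1) scoring."""
    if len(responses) != 12:
        raise ValueError("GHQ-12 requires exactly 12 item responses")
    return sum(1 if r >= 2 else 0 for r in responses)

def classify(score, moderate=4, severe=8):
    # Illustrative cut-offs only, not the study's polychoric thresholds.
    if score >= severe:
        return "severe"
    if score >= moderate:
        return "moderate"
    return "none/mild"

example = [0, 1, 2, 3, 2, 2, 1, 0, 3, 3, 2, 1]
s = ghq12_bimodal_score(example)  # seven items score 2 or 3
```

The same 0-3 responses can also be summed directly (Likert scoring, 0-36) when a finer-grained severity scale is wanted.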
Procedia PDF Downloads 388
439 Treatment and Diagnostic Imaging Methods of Fetal Heart Function in Radiology
Authors: Mahdi Farajzadeh Ajirlou
Abstract:
Prior evidence of normal cardiac anatomy is desirable to relieve the anxiety of patients with a family history of congenital heart disease, or to offer the option of early termination of gestation or close follow-up should a cardiac anomaly be proven. Fetal heart detection plays an important part in the evaluation of the fetus and can reflect fetal heart function, which is regulated by the central nervous system. Acquisition of ventricular volume and inflow data would be useful to quantify valve regurgitation and ventricular function, in order to determine the degree of cardiovascular compromise in fetal conditions at risk for hydrops fetalis. This study discusses imaging the fetal heart with transvaginal ultrasound, Doppler ultrasound, three-dimensional ultrasound (3DUS) and four-dimensional (4D) ultrasound, spatiotemporal image correlation (STIC), magnetic resonance imaging, and cardiac catheterization. Doppler ultrasound (DUS) provides a real-time image with good depiction of blood vessels and soft tissues. DUS imaging can show the shape of the fetus, but it cannot show whether the fetus is hypoxic or distressed. Spatiotemporal image correlation (STIC) enables the acquisition of a volume of data synchronized with the beating heart. The automated volume acquisition is made possible by the array in the transducer performing a slow single sweep, recording a single 3D data set consisting of numerous 2D frames one behind the other. The volume acquisition can be done as a static 3D scan, as an online 4D scan (direct volume scan, live 3D ultrasound, or so-called 4D (3D/4D)), or as spatiotemporal image correlation (STIC, off-line 4D, which is a cyclic volume scan). Fetal cardiovascular MRI would appear to be an ideal approach to the noninvasive investigation of the impact of abnormal cardiovascular hemodynamics on antenatal brain growth and development.
Still, there are practical limitations to the use of conventional MRI for fetal cardiovascular assessment, including the small size and high heart rate of the human fetus, the lack of conventional cardiac gating methods to synchronize data acquisition, and the potential corruption of MRI data due to maternal respiration and unpredictable fetal movements. Fetal cardiac MRI has the potential to complement ultrasound in detecting cardiovascular malformations and extracardiac lesions. Fetal cardiac intervention (FCI), a set of minimally invasive catheter interventions, is a new and evolving technique that allows in-utero treatment of a subset of severe forms of congenital heart disease. In special cases, it may be possible to modify the natural history of congenital heart disorders. It is entirely possible that future generations will 'repair' congenital heart disease in utero using nanotechnologies or remote computer-guided micro-robots that work at the cellular level.
Keywords: fetal, cardiac MRI, ultrasound, 3D, 4D, heart disease, invasive, noninvasive, catheter
Procedia PDF Downloads 43
438 Fabrication of Electrospun Green Fluorescent Protein Nano-Fibers for Biomedical Applications
Authors: Yakup Ulusu, Faruk Ozel, Numan Eczacioglu, Abdurrahman Ozen, Sabriye Acikgoz
Abstract:
GFP, discovered in the mid-1970s, has been used as a marker in genetic studies ever since. In biotechnology and cell and molecular biology, the GFP gene is frequently used as a reporter of expression. In modified forms, it has been used to make biosensors. Many animals have been created that express GFP, as evidence that a gene can be expressed throughout a given organism. The locations of proteins labeled with GFP can be identified, and so cell connections can be monitored, gene expression can be reported, protein-protein interactions can be observed, and signals that create events can be detected. Additionally, monitoring GFP is noninvasive; it can be detected under UV light because it simply generates fluorescence. Moreover, GFP is a relatively small and inert molecule that does not seem to affect any biological processes of interest. The synthesis of GFP involves several steps: constructing the plasmid system, transformation into E. coli, and production and purification of the protein. The GFP-carrying plasmid vector pBAD-GFPuv was digested using two different restriction endonucleases (NheI and EcoRI), and the DNA fragment of GFP was gel-purified before cloning. The GFP-encoding DNA fragment was ligated into the pET28a plasmid using the NheI and EcoRI restriction sites. The final plasmid was named pETGFP, and DNA sequencing of this plasmid indicated that the hexahistidine-tagged GFP was correctly inserted. Histidine-tagged GFP was expressed in an Escherichia coli BL21 DE3 (pLysE) strain. The strain was transformed with the pETGFP plasmid and grown on Luria-Bertani (LB) plates with kanamycin and chloramphenicol selection. E. coli cells were grown to an optical density (OD600) of 0.8, induced by the addition of isopropyl-thiogalactopyranoside (IPTG) to a final concentration of 1 mM, and then grown for an additional 4 h. The amino-terminal hexahistidine tag facilitated purification of the GFP using a His-Bind affinity chromatography resin (Novagen).
The purity of the GFP protein was analyzed by 12% sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE). The concentration of the protein was determined by UV absorption at 280 nm (Varian Cary 50 Scan UV/VIS spectrophotometer). GFP-polymer composite nanofibers were produced using a GFP solution (10 mg/mL) and the polymer precursor polyvinylpyrrolidone (PVP, Mw = 1,300,000) as starting material and template, respectively. For the fabrication of nanofibers with different fiber diameters, sol-gel solutions comprising 0.40, 0.60, or 0.80 g PVP (depending upon the desired fiber diameter) and 100 mg GFP in 10 mL of a water:ethanol (3:2) mixture were prepared, and each solution was then deposited on a collecting plate via electrospinning at 10 kV with a feed rate of 0.25 mL h-1 using a Spellman electrospinning system. The results show that GFP-based nanofibers can be used in many biomedical applications, such as bio-imaging, biomechanics, biomaterials, and tissue engineering.
Keywords: biomaterial, GFP, nano-fibers, protein expression
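The A280 concentration measurement follows the Beer-Lambert law (A = ε·c·l). A minimal sketch, assuming an illustrative molar extinction coefficient; the actual coefficient used for the hexahistidine-tagged GFPuv is not given in the abstract.

```python
# Beer-Lambert estimate of protein concentration from UV absorbance at 280 nm:
#   A = epsilon * c * l  =>  c = A / (epsilon * l)
# epsilon: molar extinction coefficient in M^-1 cm^-1 (assumed value below),
# l: cuvette path length in cm.

def concentration_molar(a280, epsilon_m_cm, path_cm=1.0):
    """Return protein concentration in mol/L from absorbance at 280 nm."""
    return a280 / (epsilon_m_cm * path_cm)

# Illustrative numbers only: A280 = 0.5, epsilon = 20,000 M^-1 cm^-1.
c = concentration_molar(0.5, 20_000)  # mol/L
```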
Procedia PDF Downloads 320
437 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research will provide new insights into the risks associated with digital embedded infrastructure. Through this research, we analyze each risk and its potential mitigation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research aiming to create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The following list shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data: sensors break, and when they do, they do not always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity: the weight of the data collected will affect data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise.
Malware: malware can be anything from simple viruses to complex botnets created with specific goals, where the creator is stealing computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be mitigated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via the blockchain keys.
Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
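A minimal sketch of the idea of devices policing each other: each reading is checked against the group consensus (here simply the median of the peers' reports), and deviant devices are flagged. This illustrates consensus-based validation only; it is not the blockchain middleware evaluated in the study, and the tolerance is an assumed parameter.

```python
# Toy consensus check for an IoT sensor group: compare each device's reading
# against the median of all readings and flag any device whose deviation
# exceeds an assumed tolerance (e.g. an injected or failed sensor).
from statistics import median

def flag_deviant(readings, tolerance=5.0):
    """Return the ids of devices whose reading deviates from the consensus."""
    consensus = median(readings.values())
    return sorted(dev for dev, value in readings.items()
                  if abs(value - consensus) > tolerance)

# "s4" injects noise far from the group consensus and gets flagged.
readings = {"s1": 20.1, "s2": 19.8, "s3": 20.4, "s4": 55.0}
```

A real deployment would run this check continuously and record the verdicts on the shared ledger, so that a single compromised node cannot rewrite the group's history.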
Procedia PDF Downloads 107
436 The Effect of Degraded Shock Absorbers on the Safety-Critical Tipping and Rolling Behaviour of Passenger Cars
Authors: Tobias Schramm, Günther Prokop
Abstract:
In Germany, the number of road fatalities has been falling since 2010, but at a more moderate rate than before. At the same time, the average age of all registered passenger cars in Germany is rising continuously. Studies show that there is a correlation between the age and mileage of passenger cars and the degradation of their chassis components. Various studies show that degraded shock absorbers increase the braking distance of passenger cars and have a negative impact on driving stability. The exact effect of degraded vehicle shock absorbers on road safety is still the subject of research. A shock absorber examination as part of the periodic technical inspection is mandatory in only very few countries. In Germany, there is as yet no requirement for such an examination. More comprehensive findings on the effect of degraded shock absorbers on the safety-critical driving dynamics of passenger cars could provide further arguments for the introduction of mandatory shock absorber testing as part of the periodic technical inspection. The specific effect chains of untripped rollover accidents are also still the subject of research. However, current research results show that the high proportion of sport utility vehicles in the vehicle fleet significantly increases the probability of untripped rollover accidents. The aim of this work is to estimate the effect of degraded twin-tube shock absorbers on the safety-critical tipping and rolling behaviour of passenger cars, which can lead to untripped rollover accidents. A characteristic-curve-based five-mass full-vehicle model and a semi-physical phenomenological shock absorber model were set up, parameterized, and validated. The shock absorber model is able to reproduce the damping characteristics of vehicle twin-tube shock absorbers with oil and gas loss for various excitations. The full-vehicle model was validated with steering-wheel-angle sine sweep driving maneuvers.
The model was then used to simulate steering-wheel-angle sine and fishhook maneuvers, which probe the safety-critical tipping and rolling behaviour of passenger cars. The simulations were carried out over a realistic parameter space in order to demonstrate how various vehicle characteristics modulate the effect of degraded shock absorbers. As a result, it was shown that degraded shock absorbers have a negative effect on the tipping and rolling behaviour of all passenger cars. Shock absorber degradation leads to a significant increase in the observed roll angles, particularly in the range of the roll natural frequency. This amplification has a negative effect on the wheel load distribution during the driving maneuvers investigated. In particular, the height of the vehicle's center of gravity and the stabilizer stiffness have a major influence on the effect of degraded shock absorbers on the tipping and rolling behaviour of passenger cars.
Keywords: numerical simulation, safety-critical driving dynamics, suspension degradation, tipping and rolling behavior of passenger cars, vehicle shock absorber
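The reported roll-angle amplification near the roll natural frequency can be illustrated with a linear single-degree-of-freedom roll model: lowering the damping ratio raises the magnification factor at resonance. The damping ratios below are assumed values for illustration, not parameters of the paper's five-mass model.

```python
# Steady-state magnification factor of a linear 2nd-order (mass-spring-damper)
# system, evaluated at a given excitation-to-natural-frequency ratio r and
# damping ratio zeta:  G(r) = 1 / sqrt((1 - r^2)^2 + (2*zeta*r)^2).
# At resonance (r = 1) this reduces to 1 / (2*zeta), so a damper that has
# lost oil or gas (lower zeta) produces a larger roll-angle response.
import math

def roll_gain(freq_ratio, zeta):
    """Dynamic magnification factor of a 2nd-order system."""
    r = freq_ratio
    return 1.0 / math.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

nominal = roll_gain(1.0, 0.30)   # healthy damper (assumed damping ratio)
degraded = roll_gain(1.0, 0.12)  # damper with oil/gas loss (assumed ratio)
```

The ratio degraded/nominal directly mirrors the roll-angle increase the paper observes near the roll natural frequency.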
Procedia PDF Downloads 17
435 An Evidence-Based Laboratory Medicine (EBLM) Test to Help Doctors in the Assessment of the Pancreatic Endocrine Function
Authors: Sergio J. Calleja, Adria Roca, José D. Santotoribio
Abstract:
Pancreatic endocrine diseases include pathologies such as insulin resistance (IR), prediabetes, and type 2 diabetes mellitus (DM2). Some of these are highly prevalent in the U.S.: 40% of U.S. adults have IR, 38% have prediabetes, and 12% have DM2, as reported by the National Center for Biotechnology Information (NCBI). Building upon this imperative, the objective of the present study was to develop a non-invasive test for the assessment of the patient's pancreatic endocrine function and to evaluate its accuracy in detecting pancreatic endocrine diseases such as IR, prediabetes, and DM2. This approach to a routine blood and urine test is based on serum and urine biomarkers. It combines several independent public algorithms, such as the Adult Treatment Panel III (ATP-III) criteria, the triglycerides and glucose (TyG) index, the homeostasis model assessment of insulin resistance (HOMA-IR), HOMA-2, and the quantitative insulin-sensitivity check index (QUICKI). Additionally, it incorporates essential measurements such as creatinine clearance, estimated glomerular filtration rate (eGFR), urine albumin-to-creatinine ratio (ACR), and urinalysis, which help to achieve a full picture of the patient's pancreatic endocrine disease. To evaluate the estimated accuracy of this test, an iterative process was performed by a machine learning (ML) algorithm with a training set of 9,391 patients. The sensitivity achieved was 97.98% and the specificity was 99.13%. Consequently, the area under the receiver operating characteristic (AUROC) curve, the positive predictive value (PPV), and the negative predictive value (NPV) were 92.48%, 99.12%, and 98.00%, respectively. The algorithm was validated in a randomized controlled trial (RCT) with a target sample size (n) of 314 patients.
However, 50 patients were initially excluded from the study because they had ongoing clinically diagnosed pathologies, symptoms, or signs, so n dropped to 264 patients. Then, 110 patients were excluded because they did not show up at the clinical facility for any of the follow-up visits (a critical point to improve for the upcoming RCT, since the cost of each patient is very high and almost a third of the patients already tested were lost), so the new n consisted of 154 patients. After that, 2 patients were excluded because some of their laboratory parameters and/or clinical information were wrong or incorrect. Thus, a final n of 152 patients was achieved. In this validation set, the results obtained were: 100.00% sensitivity, 100.00% specificity, 100.00% AUROC, 100.00% PPV, and 100.00% NPV. These results suggest that this approach to a routine blood and urine test holds promise for providing timely and accurate diagnoses of pancreatic endocrine diseases, particularly among individuals aged 40 and above. Given the current epidemiological state of these diseases, the findings underscore the significance of early detection. Furthermore, they advocate for further exploration, prompting the intention to conduct a clinical trial involving 26,000 participants (from March 2025 to December 2026).
Keywords: algorithm, diabetes, laboratory medicine, non-invasive
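Three of the public indices the test combines have standard published formulas, sketched below (glucose in mg/dL, insulin in µU/mL, triglycerides in mg/dL). How the test weights and thresholds these indices is not public, so this reproduces only the individual indices, not the combined algorithm.

```python
# Standard published insulin-resistance indices:
#   HOMA-IR = glucose * insulin / 405          (mg/dL, microU/mL units)
#   QUICKI  = 1 / (log10(insulin) + log10(glucose))
#   TyG     = ln(triglycerides * glucose / 2)
import math

def homa_ir(glucose_mg_dl, insulin_uu_ml):
    """Homeostasis model assessment of insulin resistance."""
    return glucose_mg_dl * insulin_uu_ml / 405.0

def quicki(glucose_mg_dl, insulin_uu_ml):
    """Quantitative insulin-sensitivity check index."""
    return 1.0 / (math.log10(insulin_uu_ml) + math.log10(glucose_mg_dl))

def tyg(glucose_mg_dl, triglycerides_mg_dl):
    """Triglycerides and glucose index."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

# Illustrative fasting values, not data from the study.
h = homa_ir(100, 10)
q = quicki(100, 10)
t = tyg(100, 150)
```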
Procedia PDF Downloads 34
434 Developing Three-Dimensional Digital Image Correlation Method to Detect the Crack Variation at the Joint of Weld Steel Plate
Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung
Abstract:
The purpose of a hydraulic gate is to store and drain water. It bears long-term hydraulic pressure and earthquake forces and is very important for reservoirs and hydropower plants. High-tensile-strength steel plate is used as the construction material of hydraulic gates. Cracks and rust are induced by material defects, poor construction, seismic excitation, and the underwater environment; in a gate with a crack, the resulting stress concentration induces a high crack growth rate and affects the safety and usage of the hydroelectric power plant. Stress distribution analysis is therefore an essential surveying technique for analyzing bi-material and singular-point problems. The finite difference infinitely small element method has been demonstrated to be suitable for analyzing the buckling phenomena of weld seams and steel plates with cracks; in particular, this method can easily analyze the singularity of a kink crack. Nevertheless, the construction form and deformation shape of some gates are three-dimensional. Therefore, three-dimensional Digital Image Correlation (DIC) has been developed and applied to analyze the strain variation of steel plate with a crack at the weld joint. The Digital Image Correlation (DIC) technique is a non-contact method for measuring the deformation of a test object. With the rapid development of digital cameras, the cost of this technique has been reduced. Moreover, the DIC method has the advantages of wide practical applicability in both indoor and field tests, without restriction on the size of the test object. Thus, the purpose of this research is to develop and apply this technique to monitor the crack variations of a welded steel hydraulic gate and its deformation under loading.
Images can be extracted from the real-time monitoring process to analyze the strain change at each loading stage. The three-dimensional DIC method developed in this study is applied to analyze the post-buckling phenomenon and buckling tendency of welded steel plate with a crack. The stress intensity of the three-dimensional analysis of different materials and enhanced materials in steel plate is then analyzed in this paper. The test results show that the proposed three-dimensional DIC method can precisely detect the crack variation of welded steel plate under different loading stages. In particular, this DIC method can detect and identify the crack position and other flaws of the welded steel plate, phenomena which traditional test methods can hardly detect. Therefore, the proposed three-dimensional DIC method can be applied to observe the mechanical phenomena of composite materials subjected to loading and operation.
Keywords: welded steel plate, crack variation, three-dimensional digital image correlation (DIC), cracked steel plate
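The core of subset-based DIC matching can be sketched as a zero-normalized cross-correlation (ZNCC) search: a reference subset is located in the deformed image by maximizing the correlation coefficient. Real 3D-DIC adds stereo calibration and subpixel interpolation; this integer-pixel sketch on synthetic images shows only the matching criterion.

```python
# Subset matching by zero-normalized cross-correlation: the reference subset
# is compared against every candidate position in the deformed image, and the
# position with the highest ZNCC score is taken as the displaced location.
import numpy as np

def zncc(ref, cand):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = ref - ref.mean()
    b = cand - cand.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

def find_subset(reference, deformed, top_left, size):
    """Exhaustive integer-pixel search for the best-matching subset."""
    y0, x0 = top_left
    ref = reference[y0:y0 + size, x0:x0 + size]
    best, best_pos = -2.0, None
    h, w = deformed.shape
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            score = zncc(ref, deformed[y:y + size, x:x + size])
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

# Synthetic test: a rigid shift of 2 px down and 3 px right.
rng = np.random.default_rng(0)
img = rng.random((30, 30))
shifted = np.roll(np.roll(img, 2, axis=0), 3, axis=1)
```

Tracking the recovered subset positions across loading stages gives the displacement field from which strains around the crack are computed.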
Procedia PDF Downloads 520
433 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy
Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay
Abstract:
Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury, associated with a three-fold risk of poor outcome, and is more amenable to corrective interventions subsequent to early identification and management. Multiple definitions for stratification of patients' risk of early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison with recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospective data of n = 100 adult trauma patients were then collected; the cohort was stratified by the established definition, classified as 'coagulopathic' or 'non-coagulopathic', and correlated with the prediction of acute coagulopathy of trauma score and the trauma-induced coagulopathy clinical score for identifying trauma coagulopathy and subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition.
The overall prediction of acute coagulopathy of trauma score was 118.7±58.5, and the trauma-induced coagulopathy clinical score was 3 (0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (prediction of acute coagulopathy of trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; trauma-induced coagulopathy clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but not statistically significantly so. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high prediction of acute coagulopathy of trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the trauma-induced coagulopathy clinical score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality, in comparison with the prehospital parameter-based scoring systems. The prediction of acute coagulopathy of trauma score may be better suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests give highly specific results.
Keywords: trauma, coagulopathy, prediction, model
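A cut-off like INR ≥ 1.19 is typically derived from the ROC analysis by sweeping candidate thresholds and keeping the one that maximizes Youden's J = sensitivity + specificity - 1. The sketch below does this on made-up values, not the study's 490-patient dataset.

```python
# Youden-index threshold selection: for each candidate cut-off, compute
# sensitivity and specificity of the rule "positive if value >= cut-off"
# and keep the cut-off with the largest J = sensitivity + specificity - 1.

def youden_cutoff(values, labels):
    """values: assay results; labels: 1 = coagulopathic, 0 = not."""
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < cut and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < cut and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= cut and y == 0)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Invented INR values for illustration (1 = coagulopathic outcome).
inr = [1.0, 1.05, 1.1, 1.19, 1.25, 1.3, 1.4, 1.15]
coag = [0, 0, 0, 1, 1, 1, 1, 0]
```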
Procedia PDF Downloads 176
432 Construction of a Dynamic Migration Model of Extracellular Fluid in Brain for Future Integrated Control of Brain State
Authors: Tomohiko Utsuki, Kyoka Sato
Abstract:
In emergency medicine, it is recognized that brain resuscitation is very important for reducing the mortality rate and neurological sequelae. In particular, control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is required for stabilizing the brain's physiological state in the treatment of conditions such as brain injury, stroke, and encephalopathy. However, manual control of BT, ICP, and CBF frequently requires decisions and actions by medical staff concerning medication and the settings of therapeutic apparatus. Thus, integrating and automating their control would be very effective not only for improving the therapeutic effect but also for reducing staff burden and medical cost. For realizing such integration and automation, a mathematical model of the brain's physiological state is necessary as the controlled object in simulations, because performance tests of a prototype control system on patients are not ethically allowed. A model of cerebral blood circulation, which is the most basic part of the brain's physiological state, has already been constructed. A migration model of extracellular fluid in the brain has also been constructed; however, that model did not consider the condition that the total volume of the intracranial cavity is almost unchanging due to the hardness of the cranial bone. Therefore, in this research, a dynamic migration model of extracellular fluid in the brain was constructed that accounts for the constancy of the intracranial cavity's total volume. This model is connectable to the cerebral blood circulation model. The constructed model consists of fourteen compartments, twelve of which correspond to the perfused areas of the bilateral anterior, middle, and posterior cerebral arteries, while the others correspond to the cerebral ventricles and the subarachnoid space.
This model enables calculation of the migration of tissue fluid from capillaries to gray matter and white matter, the flow of tissue fluid between compartments, the production and absorption of cerebrospinal fluid at the choroid plexus and arachnoid granulations, and the production of metabolic water. Further, the volume, colloid concentration, and tissue pressure of/in each compartment can be calculated by solving 40-dimensional non-linear simultaneous differential equations. In this research, the obtained model was analyzed for validation under four conditions: a normal adult, an adult with higher cerebral capillary pressure, an adult with lower cerebral capillary pressure, and an adult with lower colloid concentration in the cerebral capillaries. As a result, the calculated fluid flow, tissue volume, colloid concentration, and tissue pressure all converged to values suitable for the set condition within 60 minutes at most. Because these results did not conflict with prior knowledge, the model can adequately represent the physiological state of the brain, at least under such limited conditions. One of the next challenges is to integrate this model with the already constructed cerebral blood circulation model. This modification will enable more precise simulation of CBF and ICP by calculating the effect of blood pressure changes on extracellular fluid migration and of ICP changes on CBF.
Keywords: dynamic model, cerebral extracellular migration, brain resuscitation, automatic control
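The style of compartmental bookkeeping described above can be sketched with a toy two-compartment version: fluid enters from the capillaries, migrates between compartments down a volume difference, and is absorbed as CSF. All coefficients are invented for illustration; the actual model solves 40 coupled non-linear equations over fourteen compartments.

```python
# Toy two-compartment fluid balance, integrated by explicit Euler steps:
#   dv1/dt = inflow - k12*(v1 - v2)         (capillary inflow, exchange)
#   dv2/dt = k12*(v1 - v2) - k_abs*v2       (exchange, CSF absorption)
# At steady state, v2 = inflow / k_abs and v1 = v2 + inflow / k12.

def simulate(v1=100.0, v2=100.0, k12=0.05, k_abs=0.02,
             inflow=1.0, dt=0.1, t_end=600.0):
    """Integrate the two-compartment balance until t_end and return (v1, v2)."""
    t = 0.0
    while t < t_end:
        q12 = k12 * (v1 - v2)        # exchange flow down the volume difference
        v1 += (inflow - q12) * dt
        v2 += (q12 - k_abs * v2) * dt
        t += dt
    return v1, v2
```

With the invented coefficients, the steady state is v2 = 1.0/0.02 = 50 and v1 = 50 + 1.0/0.05 = 70, which the integration approaches when run long enough.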
Procedia PDF Downloads 157
431 Energy Refurbishment of University Building in Cold Italian Climate: Energy Audit and Performance Optimization
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
Directive 2010/31/EU, 'Directive of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings', set more ambitious targets than the previous version, for instance by establishing that, by 31 December 2020, all new buildings should be nearly zero-energy. Moreover, the demonstrative role of public buildings is strongly affirmed, so that for them the nearly zero-energy target is anticipated, to January 2019. On the other hand, given the very low turnover rate of buildings (in Europe, it ranges between 1-3% yearly), a policy that does not consider the renovation of the existing building stock cannot be effective in the short and medium term. Accordingly, this study provides a novel, holistic approach to designing the refurbishment of educational buildings in the colder cities of Mediterranean regions, enabling stakeholders to understand the uncertainty of numerical modelling and the real environmental and economic impacts of adopting energy efficiency technologies. The case study is a university building in the Molise region, in the centre of Italy. The proposed approach is based on the cost-optimal methodology set out in Delegated Regulation (EU) No 244/2012 and the accompanying Guidelines of the European Commission, for evaluating the cost-optimal level of energy performance with a macroeconomic approach. This means that the refurbishment scenario should correspond to the configuration that leads to the lowest global cost over the estimated economic life-cycle, taking into account not only the investment cost but also the operational costs, linked to energy consumption and polluting emissions. The definition of the reference building has been supported by various in-situ surveys, investigations, and evaluations of indoor comfort.
Data collection can be divided into five categories: 1) geometrical features; 2) building envelope audit; 3) technical system and equipment characterization; 4) building use and thermal zones definition; 5) energy building data. For each category, the required measures have been indicated, with some suggestions for the identification of the spatial distribution and timing of the measurements. With reference to the case study, the collected data, together with a comparison with energy bills, allowed a proper calibration of a numerical model suitable for hourly energy simulation by means of EnergyPlus. Around 30 energy efficiency measures/packages have been taken into account, concerning both the envelope and the plant systems. Starting from the results, two points are examined exhaustively: (i) the importance of using validated models to simulate the present performance of the building under investigation; (ii) the environmental benefits and the economic implications of a deep energy refurbishment of an educational building in cold climates.
Keywords: energy simulation, modelling calibration, cost-optimal retrofit, university building
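The macroeconomic comparison at the heart of the cost-optimal methodology reduces to discounting each scenario's annual running costs over the economic life-cycle and adding the investment. A minimal sketch of that calculation, with purely illustrative figures (the discount rate, horizon, and cost values are assumptions, not numbers from the study):

```python
def global_cost(investment, annual_cost, discount_rate, years):
    """Global cost per the cost-optimal methodology: initial investment plus
    annual operational costs (energy, maintenance, emissions) discounted over
    the estimated economic life-cycle."""
    discounted = sum(annual_cost / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return investment + discounted

# Illustrative comparison of "do nothing" vs. a deep retrofit package
baseline = global_cost(investment=0, annual_cost=50_000,
                       discount_rate=0.03, years=30)
retrofit = global_cost(investment=400_000, annual_cost=25_000,
                       discount_rate=0.03, years=30)
# The cost-optimal configuration is the one with the lowest global cost.
```

Under these assumed figures the retrofit wins on global cost; in the study, each of the roughly 30 measures/packages would be ranked this way.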
Procedia PDF Downloads 181
430 Rheological Characterization of Polysaccharide Extracted from Camelina Meal as a New Source of Thickening Agent
Authors: Mohammad Anvari, Helen S. Joyner (Melito)
Abstract:
Camelina sativa (L.) Crantz is an oilseed crop currently used for the production of biofuels. However, the low price of diesel and gasoline has made camelina an unprofitable crop for farmers, leading to declining camelina production in the US. Hence, the ability to utilize the camelina byproduct (defatted meal) after oil extraction would be a pivotal factor in promoting the economic value of the plant. Camelina defatted meal is rich in proteins and polysaccharides. The great diversity in polysaccharide structural features provides a unique opportunity for use in food formulations as thickeners, gelling agents, emulsifiers, and stabilizers. There is currently a great degree of interest in the study of novel plant polysaccharides, as they can be derived from readily accessible sources and have potential application in a wide range of food formulations. However, there are no published studies on the polysaccharide extracted from camelina meal, and its potential industrial applications remain largely unexploited. Rheological properties are a key functional feature of polysaccharides and are highly dependent on the material composition and molecular structure. Therefore, the objective of this study was to evaluate the rheological properties of the polysaccharide extracted from camelina meal under different conditions, to obtain insight into the molecular characteristics of the polysaccharide. Flow and dynamic mechanical behaviors were determined at different temperatures (5-50°C) and concentrations (1-6% w/v). Additionally, the zeta potential of the polysaccharide dispersion was measured at different pHs (2-11) and a biopolymer concentration of 0.05% (w/v). Shear rate sweep data revealed that the camelina polysaccharide displayed shear-thinning (pseudoplastic) behavior, which is typical of polymer systems.
The polysaccharide dispersion (1% w/v) showed no significant changes in viscosity with temperature, which makes it a promising ingredient in products requiring texture stability over a range of temperatures. However, the viscosity increased significantly with increased concentration, indicating that camelina polysaccharide can be used in food products at different concentrations to produce a range of textures. Dynamic mechanical spectra showed similar trends: temperature had little effect on the viscoelastic moduli, but the moduli were strongly affected by concentration, with samples exhibiting concentrated-solution behavior at low concentrations (1-2% w/v) and weak-gel behavior at higher concentrations (4-6% w/v). These rheological properties can be used for the design and modeling of liquid and semisolid products. Zeta potential affects the intensity of molecular interactions and molecular conformation and can alter the solubility, stability, and eventually the functionality of materials as their environment changes. In this study, the zeta potential value decreased significantly from 0.0 to -62.5 as pH increased from 2 to 11, indicating that pH may affect the functional properties of the polysaccharide. The results obtained in the current study showed that camelina polysaccharide has significant potential for application in various food systems and can be introduced as a novel anionic thickening agent with unique properties.
Keywords: Camelina meal, polysaccharide, rheology, zeta potential
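The shear-thinning behavior reported above is commonly described by the Ostwald-de Waele (power-law) model, eta = K * gamma_dot**(n - 1) with n < 1. A minimal sketch with hypothetical parameters (K and n here are illustrative, not values fitted to the camelina data):

```python
def apparent_viscosity(shear_rate, K, n):
    """Power-law (Ostwald-de Waele) model: eta = K * shear_rate**(n - 1).
    A flow behaviour index n < 1 means shear-thinning (pseudoplastic) flow."""
    return K * shear_rate ** (n - 1)

# Hypothetical consistency index K and flow index n for a 1% w/v dispersion
shear_rates = [0.1, 1.0, 10.0, 100.0]                    # 1/s
etas = [apparent_viscosity(g, K=2.0, n=0.5) for g in shear_rates]
# Viscosity falls as shear rate rises: the signature of pseudoplasticity.
```

Fitting K and n to a shear rate sweep at each temperature and concentration is one standard way to condense flow curves like these into two comparable parameters.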
Procedia PDF Downloads 245
429 Simulation of Hydraulic Fracturing Fluid Cleanup for Partially Degraded Fracturing Fluids in Unconventional Gas Reservoirs
Authors: Regina A. Tayong, Reza Barati
Abstract:
A stable, fast, and robust three-phase, 2D IMPES simulator has been developed for assessing the influence of breaker concentration (on the yield stress of the filter cake and the broken-gel viscosity), varying polymer concentration/yield stress along the fracture face, fracture conductivity, fracture length, capillary pressure changes, and formation damage on fracturing fluid cleanup in tight gas reservoirs. This model has been validated against field data reported in the literature for the same reservoir. A 2D, two-phase (gas/water) fracture propagation model is used to model the invasion zone and create the initial conditions for the clean-up model by distributing 200 bbls of water around the fracture. A 2D, three-phase IMPES simulator incorporating yield-power-law rheology has been developed in MATLAB to characterize fluid flow through a hydraulically fractured grid. The variation in polymer concentration along the fracture is computed from a material balance equation relating the initial polymer concentration to the total volume of injected fluid and the fracture volume. All governing equations and the methods employed have been adequately reported to permit easy replication of the results. Increasing capillary pressure in the formation simulated in this study resulted in a 10.4% decrease in cumulative production after 100 days of fluid recovery. Increasing the breaker concentration from 5 to 15 gal/Mgal, through its effect on the yield stress and fluid viscosity of a 200 lb/Mgal guar fluid, resulted in a 10.83% increase in cumulative gas production. For tight gas formations (k = 0.05 md), fluid recovery increases with increasing shut-in time, fracture conductivity, and fracture length, irrespective of the yield stress of the fracturing fluid. Mechanically induced formation damage combined with hydraulic damage tends to be the most significant.
Several correlations have been developed relating the pressure distribution and polymer concentration to the distance along the fracture face, and the average polymer concentration to the injection time. The gradient in the yield stress distribution along the fracture face becomes steeper with increasing polymer concentration. The rate at which the yield stress (τ_o) increases is found to be proportional to the square of the volume of fluid lost to the formation. Finally, an improvement on previous results was achieved by simulating the yield stress variation along the fracture face rather than assuming constant values, because fluid loss to the formation and the polymer concentration distribution along the fracture face decrease with distance from the injection well. The novelty of this three-phase flow model lies in its ability to (i) simulate yield stress variation with fluid-loss volume along the fracture face for different initial guar concentrations, and (ii) simulate the effect of increasing breaker activity on yield stress and broken-gel viscosity, together with the effect of (i) and (ii) on cumulative gas production, within reasonable computational time.
Keywords: formation damage, hydraulic fracturing, polymer cleanup, multiphase flow numerical simulation
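The reported correlation, with τ_o growing as the square of the fluid-loss volume, can be sketched as a simple scaling relation. The reference values below are placeholders for illustration, not numbers from the simulator:

```python
def yield_stress(tau_ref, v_loss, v_ref):
    """Yield stress scaling from the reported correlation: tau_o is
    proportional to the square of the volume of fluid lost to the formation.
    tau_ref is the yield stress observed at the reference loss volume v_ref."""
    return tau_ref * (v_loss / v_ref) ** 2

# Fluid loss decreases away from the injection well, so the filter-cake
# yield stress along the fracture face decreases with distance as well.
v_loss_profile = [10.0, 8.0, 5.0, 2.0]    # bbl, hypothetical, well -> tip
tau_profile = [yield_stress(5.0, v, 10.0) for v in v_loss_profile]
```

Evaluating such a profile cell by cell is what replaces the constant-yield-stress assumption criticized above.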
Procedia PDF Downloads 132
428 Methodology for Risk Assessment of Nitrosamine Drug Substance Related Impurities in Glipizide Antidiabetic Formulations
Authors: Ravisinh Solanki, Ravi Patel, Chhaganbhai Patel
Abstract:
Purpose: The purpose of this study is to develop a methodology for the risk assessment and evaluation of nitrosamine impurities in Glipizide antidiabetic formulations. Nitroso compounds, including nitrosamines, have emerged as significant concerns in drug products, as highlighted by the ICH M7 guidelines. This study aims to identify known and potential sources of nitrosamine impurities that may contaminate Glipizide formulations and to assess their presence. By determining observed or predicted levels of these impurities and comparing them with regulatory guidance, this research will contribute to ensuring the safety and quality of combination antidiabetic drug products on the market. Factors contributing to the presence of genotoxic nitrosamine contaminants in glipizide medications, such as secondary and tertiary amines and nitroso-group-complex-forming molecules, will be investigated. Additionally, conditions necessary for nitrosamine formation, including the presence of nitrosating agents and acidic environments, will be examined to improve understanding and mitigation strategies. Method: The methodology involves the N-Nitroso Acid Precursor (NAP) test, as recommended by the WHO in 1978 and detailed in the 1980 International Agency for Research on Cancer monograph. Individual glass vials, each containing a quantity equivalent to 10 mM of Glipizide, are prepared. The compound is dissolved in an acidic environment and supplemented with 40 mM NaNO2. The resulting solutions are maintained at a temperature of 37°C for a duration of 4 hours. For the analysis of the samples, an HPLC method is employed for fit-for-purpose separation. LC resolution is achieved using a step gradient on an Agilent Eclipse Plus C18 column (4.6 × 100 mm, 3.5 µm). Mobile phases A and B consist of 0.1% v/v formic acid in water and acetonitrile, respectively, following a gradient program.
The flow rate is set at 0.6 mL/min, and the column compartment temperature is maintained at 35°C. Detection is performed using a PDA detector within the wavelength range of 190-400 nm. To determine the exact mass of the formed nitrosamine drug substance related impurities (NDSRIs), the HPLC method is transferred to LC-TQ-MS/MS with the same mobile phase composition and gradient program. The injection volume is set at 5 µL, and MS analysis is conducted in Electrospray Ionization (ESI) mode within the mass range of 100-1000 Daltons. Results: The NAP test samples were prepared according to the protocol and analyzed using HPLC and LC-TQ-MS/MS to identify possible NDSRIs generated in different formulations of glipizide. It was found that the NAP test generated various NDSRIs, revealing a previously unreported contamination of Glipizide. These NDSRIs are categorised based on their predicted carcinogenic potency, and acceptable intakes in medicines are recommended. The analytical method was found to be specific and reproducible.
Keywords: NDSRI, nitrosamine impurities, antidiabetic, glipizide, LC-MS/MS
Procedia PDF Downloads 37
427 Optimizing the Effectiveness of Docetaxel with Solid Lipid Nanoparticles: Formulation, Characterization, in Vitro and in Vivo Assessment
Authors: Navid Mosallaei, Mahmoud Reza Jaafari, Mohammad Yahya Hanafi-Bojd, Shiva Golmohammadzadeh, Bizhan Malaekeh-Nikouei
Abstract:
Background: Docetaxel (DTX), a potent anticancer drug derived from the European yew tree, is effective against various human cancers by inhibiting microtubule depolymerization. Solid lipid nanoparticles (SLNs) have gained attention as drug carriers for enhancing drug effectiveness and safety. SLNs, submicron-sized lipid-based particles, can passively target tumors through the 'enhanced permeability and retention' (EPR) effect, providing stability, drug protection, and controlled release while being biocompatible. Methods: The SLN formulation included biodegradable lipids (Compritol and Precirol), hydrogenated soy phosphatidylcholine (H-SPC) as a lipophilic co-surfactant, and Poloxamer 188 as a non-ionic polymeric stabilizer. Two SLN preparation techniques, probe sonication and microemulsion, were assessed. Characterization encompassed the SLNs' morphology, particle size, zeta potential, matrix, and encapsulation efficiency. In vitro cytotoxicity and cellular uptake studies were conducted using mouse colorectal (C-26) and human malignant melanoma (A-375) cell lines, comparing SLN-DTX with Taxotere®. In vivo studies evaluated tumor inhibitory efficacy and survival in mice with colorectal (C-26) tumors, comparing SLN-DTX with Taxotere®. Results: SLN-DTX demonstrated stability, with an average size of 180 nm, a low polydispersity index (PDI) of 0.2, and an encapsulation efficiency of 98.0 ± 0.1%. Differential scanning calorimetry (DSC) suggested amorphous encapsulation of DTX within the SLNs. In vitro studies revealed that SLN-DTX exhibited nearly equivalent cytotoxicity to Taxotere®, depending on concentration and exposure time. Cellular uptake studies demonstrated superior intracellular DTX accumulation with SLN-DTX. In a C-26 mouse model, SLN-DTX at 10 mg/kg outperformed Taxotere® at 10 and 20 mg/kg, with no significant differences in body weight changes and a remarkably high survival rate of 60%.
Conclusion: This study concludes that SLN-DTX, prepared using probe sonication, offers stability and enhanced therapeutic effects. It displayed almost the same in vitro cytotoxicity as Taxotere® but showed superior cellular uptake. In a mouse model, SLN-DTX effectively inhibited tumor growth, with 10 mg/kg outperforming even 20 mg/kg of Taxotere®, without adverse body weight changes and with higher survival rates. This suggests that SLN-DTX has the potential to reduce adverse effects while maintaining or enhancing docetaxel's therapeutic profile, making it a promising drug delivery strategy suitable for industrialization.
Keywords: docetaxel, Taxotere®, solid lipid nanoparticles, enhanced permeability and retention effect, drug delivery, cancer chemotherapy, cytotoxicity, cellular uptake, tumor inhibition
Procedia PDF Downloads 83
426 Research on Internet Attention of Tourism and Marketing Strategy in Northeast Sichuan Economic Zone in China Based on Baidu Index
Authors: Chuanqiao Zheng, Wei Zeng, Haozhen Lin
Abstract:
As of March 2020, the number of Chinese netizens had reached 904 million, and the proportion of Internet users accessing the Internet through mobile phones was as high as 99.3%. Against the background of 'Internet +', tourists have a stronger sense of independence in the choice of tourism destinations and tourism products, and are more inclined to learn about tourism destinations and other tourists' evaluations of tourist products through the Internet. The search engine, as an integrated platform containing a wealth of information, is highly valuable for analyzing the characteristics of the Internet attention given to various tourism destinations through big data mining and analysis. This article uses the Baidu Index, one of the products of Baidu Search, as its data source. The Baidu Index is based on big data, collecting and sharing the search results of a large number of Internet users on the Baidu search engine. The big data used in this article include the search index, demand map, population profile, etc. The main research methods are: (1) based on the search index, analyzing the Internet attention given to tourism in five cities in Northeast Sichuan at different times, so as to obtain the overall trend and individual characteristics of tourism development in the region; (2) based on the demand map and the population profile, analyzing the demographic characteristics and market positioning of the tourist groups in these cities to understand the characteristics and needs of the target groups; (3) correlating the Internet attention data with the permanent population of each province in China in the corresponding period to construct the Boston matrix of the Internet attention rate of Northeast Sichuan tourism, obtain the tourism target markets, and then propose development strategies for different markets.
The study found that: a) the Internet attention given to tourism in the region can be divided into a tourist off-season and a peak season, and the Internet attention given to tourism differs considerably between cities; b) the information tourists search for includes tour guide information, ticket information, traffic information, weather information, and information on competing tourism cities; with regard to the population profile, the main group of potential tourists searching for tourism keywords for the five prefecture-level cities in Northeast Sichuan is young people, with a male-to-female ratio of about 6 to 4, males being predominant; c) through the construction of the Boston matrix, it is concluded that the star market for tourism in the Northeast Sichuan Economic Zone includes Sichuan and Shaanxi; the cash cow market includes Hainan and Ningxia; the question market includes Jiangsu and Shanghai; and the dog market includes Hubei and Jiangxi. The study concludes with the following planning strategies and recommendations: i) creating a diversified business format that integrates culture and tourism; ii) creating a brand image of niche tourism; iii) focusing on the development of tourism products; iv) innovating composite three-dimensional marketing channels.
Keywords: Baidu Index, big data, internet attention, tourism
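The Boston matrix step amounts to placing each source market in one of four quadrants from two indicators. A minimal sketch of that classification; the indicator values and thresholds below are invented for illustration, not taken from the Baidu Index data:

```python
def boston_matrix(markets, share_cut, growth_cut):
    """Classify source markets into BCG quadrants from an attention-rate
    'share' axis and an attention-growth axis."""
    cells = {}
    for name, (share, growth) in markets.items():
        if share >= share_cut and growth >= growth_cut:
            cells[name] = "star"
        elif share >= share_cut:
            cells[name] = "cash cow"
        elif growth >= growth_cut:
            cells[name] = "question mark"
        else:
            cells[name] = "dog"
    return cells

# Hypothetical (attention rate, growth) pairs for four source provinces
markets = {"Sichuan": (0.9, 0.8), "Hainan": (0.7, 0.1),
           "Shanghai": (0.2, 0.6), "Jiangxi": (0.1, 0.1)}
cells = boston_matrix(markets, share_cut=0.5, growth_cut=0.5)
```

Normalizing attention by each province's permanent population before classifying, as the study does, keeps large provinces from dominating the share axis.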
Procedia PDF Downloads 123
425 Network Impact of a Social Innovation Initiative in Rural Areas of Southern Italy
Authors: A. M. Andriano, M. Lombardi, A. Lopolito, M. Prosperi, A. Stasi, E. Iannuzzi
Abstract:
In accordance with the scientific debate on the definition of Social Innovation (SI), the present paper identifies SI as new ideas (products, services, and models) that simultaneously meet social needs and create new social relationships or collaborations. This concept offers important tools to unravel the difficult conditions of the agricultural sector in marginalized areas, characterized by the abandonment of activities, low levels of farmer education, and low generational renewal, hampering new territorial strategies aimed at an integrated and sustainable development. Models of SI in agriculture, starting from a bottom-up approach or from the community, are considered to represent the driving force of an ecological and digital revolution. A system based on SI may be able to grasp and satisfy individual and social needs and to promote new forms of entrepreneurship. In this context, Vazapp ('Go Hoeing') is an emerging SI model in southern Italy that promotes solutions for satisfying the needs of farmers and facilitates their relationships (creation of a network). The Vazapp initiative considered in this study is the Contadinners ('Farmers' dinners'), dinners held at a farmer's house where stakeholders living in the surrounding area get to know each other and are able to build a network for possible future professional collaborations. The aim of the paper is to identify the evolution of farmers' relationships, both quantitatively and qualitatively, as a result of the Contadinners initiative organized by Vazapp. To this end, the study adopts the Social Network Analysis (SNA) methodology, using UCINET (version 6.667) software to analyze the relational structure. Data collection was carried out through a questionnaire distributed to the 387 participants in the twenty Contadinners held from February 2016 to June 2018. The response rate to the survey was about 50% of the farmers.
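Two of the standard UCINET operations applied to relational data of this kind, symmetrizing the adjacency matrix so that only reciprocated ties count and dichotomizing tie strengths at a cutoff, can be sketched in plain Python. The toy matrix is illustrative, not survey data:

```python
def symmetrize_min(adj):
    """UCINET-style 'minimum' symmetrizing: a tie between i and j is kept
    at the weaker of the two directed reports, so only reciprocated
    relationships survive."""
    n = len(adj)
    return [[min(adj[i][j], adj[j][i]) for j in range(n)] for i in range(n)]

def dichotomize(adj, cutoff):
    """Binarize tie strengths: 1 if at least `cutoff`, else 0."""
    return [[1 if v >= cutoff else 0 for v in row] for row in adj]

def density(adj):
    """Cohesion measure: present ties over possible ties (off-diagonal)."""
    n = len(adj)
    ties = sum(1 for i in range(n) for j in range(n)
               if i != j and adj[i][j] > 0)
    return ties / (n * (n - 1))

# Toy directed network of three farmers, reported tie strength 0-3
reports = [[0, 2, 1],
           [3, 0, 0],
           [0, 0, 0]]
recip = symmetrize_min(reports)   # only the tie between 0 and 1 is mutual
```

Comparing density before and after an initiative is one simple way to track the growth of social capital that the cohesion analysis describes.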
The data elaboration focused on different aspects: a) measuring relational reciprocity among the farmers by symmetrizing the answers; b) measuring answer reliability by dichotomizing the answers; c) describing the evolution of social capital using cohesion measures; d) clustering the Contadinners' participants into followers and non-followers of Vazapp to evaluate its impact on the local social capital. The results concern the effectiveness of this initiative in generating trustworthy relationships within a rural area of southern Italy typically affected by individualism and mistrust. The number of relationships represents the quantitative indicator used to define the dimension of network development, while the typologies of relationships (from simple friendship to formal collaborations for branding new cooperation initiatives) represent the qualitative indicator that offers a diversified perspective on the network impact. From the analysis carried out, the Vazapp initiative surely represents a virtuous SI model for catalyzing relationships within rural areas and developing entrepreneurship based on the real needs of the community.
Procedia PDF Downloads 112
424 Contraception in Guatemala, Panajachel and the Surrounding Areas: Barriers Affecting Women's Contraceptive Usage
Authors: Natasha Bhate
Abstract:
Contraception is important in helping to reduce maternal and infant mortality rates by allowing women to control the number and spacing of their children. It also reduces the need for unsafe abortions. Women worldwide use contraception; however, the contraceptive prevalence rate is still relatively low in Central American countries like Guatemala. There is also an unmet need for contraception in Guatemala, which is greater among rural, indigenous women due to barriers preventing contraceptive use. The study objective was to investigate and analyse the current barriers women in Panajachel, Guatemala, and the surrounding areas face in using contraception, with a view to identifying ways to overcome these barriers. This included exploring the contraceptive barriers women believe exist and the influence of men in contraceptive decision-making. The study took place at a charity in Panajachel, Guatemala, and had a cross-sectional, qualitative design to allow an in-depth understanding of the information gathered. This study design was also chosen to help inform the charity with qualitative research analysis, in view of its intent to create a local reproductive health programme. A semi-structured interview design, including photo facilitation to improve cross-cultural communication, with interpreter assistance, was utilized. A pilot interview was initially conducted, with small improvements required. Participants were recruited through purposive and convenience sampling. The study host at the charity acted as a gatekeeper; participants were identified through attendance at the charity's women's-initiative programme workshops. Twenty participants were selected and agreed to participate; two did not attend, so a total of 18 participants were interviewed in June 2017. Interviews were audio-recorded, and data were stored on encrypted memory sticks. Framework analysis was used to analyse the data using NVivo11 software.
The University of Leeds granted ethical approval for the research. Religion, language, the community, and fear of sickness were examples of existing contraceptive barrier themes recognized by many participants. The influence of men was also an important barrier identified, with themes of machismo and abuse preventing contraceptive use in some women. Women from more rural areas were believed to still face barriers that some participants no longer encountered, such as distance and the affordability of contraceptives. Participants believed that informative workshops in various settings were an ideal method of overcoming existing contraceptive barriers and allowing women to be more empowered. The involvement of men in such workshops was also deemed important by participants, to help reduce their negative influence on contraceptive usage. Overall, four recommendations followed from this study: contraceptive educational courses, a gender equality campaign, couple-focused contraceptive workshops, and further qualitative research to gain a better insight into men's opinions regarding women using contraception.
Keywords: barrier, contraception, machismo, religion
Procedia PDF Downloads 128
423 Evaluation of the Performance Measures of Two-Lane Roundabout and Turbo Roundabout with Varying Truck Percentages
Authors: Evangelos Kaisar, Anika Tabassum, Taraneh Ardalan, Majed Al-Ghandour
Abstract:
The economy of any country is dependent on its ability to accommodate the movement and delivery of goods. The demand for goods movement and services increases truck traffic on highways and inside cities. The livability of most cities is directly affected by the congestion and environmental impacts of trucks, which are the backbone of the urban freight system. Better operation of heavy vehicles on highways and arterials could improve the network's efficiency and reliability. In many cases, roundabouts can respond better than at-grade intersections, enabling traffic operations with increased safety for both cars and heavy vehicles. The recently emerged concept of the turbo-roundabout is a viable alternative to the two-lane roundabout, aiming to improve traffic efficiency. The primary objective of this study is to evaluate the operation and performance level of an at-grade intersection, a conventional two-lane roundabout, and a basic turbo roundabout for freight movements. To analyze and evaluate the performance of the signalized intersections and the roundabouts, microsimulation models were developed in PTV VISSIM. The networks chosen for this study serve to experiment with and evaluate changes in the performance of vehicle movements under different geometric and flow scenarios. Several scenarios were examined to assess the impacts of various geometric designs on vehicle movements. The overall traffic efficiency depends on the geometric layout of the intersections and on the traffic congestion rate, hourly volume, frequency of heavy vehicles, type of road, and the ratio of major-street to side-street traffic. Traffic performance was determined by evaluating the delay time, number of stops, and queue length of each intersection for varying truck percentages.
The results indicate that turbo-roundabouts can replace signalized intersections and two-lane roundabouts only when the traffic demand is low, even with high truck volume. More specifically, two-lane roundabouts are seen to have shorter queue lengths than signalized intersections and turbo-roundabouts. For instance, in the scenario with the highest volume and maximum truck and left-turn movements, the signalized intersection has 3 times, and the turbo-roundabout 5 times, the queue length of a two-lane roundabout on major roads. Similarly, on minor roads, signalized intersections and turbo-roundabouts have queue lengths 11 times longer than two-lane roundabouts for the same scenario. Across all the developed scenarios, as traffic demand decreases, the queue lengths of turbo-roundabouts shorten, confirming that turbo-roundabouts perform well for low and medium traffic demand. Finally, this study provides recommendations on the conditions under which the different intersection types perform better than each other.
Keywords: at-grade intersection, simulation, turbo-roundabout, two-lane roundabout
Procedia PDF Downloads 151
422 Analysis of Lesotho Wool Production and Quality Trends 2008-2018
Authors: Papali Maqalika
Abstract:
Lesotho farmers produce significant quantities of Merino wool of a quality competitive on the global market, making a substantial contribution to the economy of Lesotho. However, even with this economic contribution, the production and quality information and trends of this fibre have been neither recognised nor documented. This is a serious shortcoming, as Lesotho wool is unknown on international markets. The situation is worsened by the fact that Lesotho wool is auctioned together with South African wool, making trading and benchmarking Lesotho wool difficult, not to mention attempts to advance its production and quality. In view of this, available data on Lesotho wool for 10 years were collected and analysed for trends to be used in benchmarking where applicable. The fibre properties analysed include fibre diameter (fineness), vegetable matter and yield, application, and price. These were selected because they are fundamental in determining fibre quality and price. Production of wool in Lesotho has increased slightly over the ten years covered by this study. It also became apparent that the production and quality trends of Lesotho wool are greatly influenced by farming practices, the breed of sheep, and climatic conditions. Greater adoption of the Merino sheep breed, sheds/barns, and sheep coats is suggested as a way to reduce the mortality rate (due to extremely cold temperatures), to reduce the vegetable matter on the fibre (thus improving the quality), and to increase yield per sheep and production as a whole. Some farming practices, such as the lack of barns, supplementary feeding, and veterinary care, present constraints on wool production. The districts in the Highlands region were found to have the highest production of wool, this being ascribed to better pastures and to climatic, social, and other conditions conducive to wool production.
The production of Lesotho wool and its quality can be improved further, possibly through the interventions the Ministry of Agriculture introduced via the Small Agricultural and Development Project (SADP) and other appropriate initiatives by the National Wool and Mohair Growers Association (NWMGA). The challenge, however, remains the lack of direct involvement of the wool growers (farmers) in decision making and policy development; this may lead to reluctance to adopt the strategies. In some cases, the wool growers do not receive the benefits associated with the interventions immediately. Based on these findings, it is recommended that the relevant educators and researchers in wool and textile science, as well as the local wool farmers in Lesotho, be represented in policy and other decision-making forums relating to these interventions. In this way, educational campaigns and training workshops will be demand-driven, with a better chance of adoption and success, because the direct beneficiaries will have been involved from inception and will have a sense of ownership as well as the intent to see them through successfully.
Keywords: Lesotho wool, wool quality, wool production, Lesotho economy, global market, apparel wool, database, textile science, exports, animal farming practices, intimate apparel, interventions
Procedia PDF Downloads 97
421 Emerging Issues for Global Impact of Foreign Institutional Investors (FII) on Indian Economy
Authors: Kamlesh Shashikant Dave
Abstract:
The global financial crisis is rooted in the sub-prime crisis in the U.S.A. During the boom years, mortgage brokers, attracted by big commissions, encouraged buyers with poor credit to accept housing mortgages with little or no down payment and without credit checks. A combination of low interest rates and a large inflow of foreign funds during the boom years helped the banks to create easy credit conditions for many years. Banks lent money on the assumption that housing prices would continue to rise. The real estate bubble also encouraged the demand for houses as financial assets. Banks and financial institutions later repackaged these debts with other high-risk debts and sold them to worldwide investors, creating financial instruments called collateralized debt obligations (CDOs). With the rise in interest rates, mortgage payments rose, and defaults among the subprime category of borrowers increased accordingly. Through the securitization of mortgage payments, a recession developed in the housing sector and was consequently transmitted to the entire US economy and the rest of the world. The financial credit crisis moved the US and the global economy into recession. The Indian economy has also been affected by the spillover effects of the global financial crisis. A strong saving habit among the people, strong fundamentals, and a conservative regulatory regime have saved the Indian economy from going out of gear, though significant parts of the economy have slowed down. Industrial activity, particularly in the manufacturing and infrastructure sectors, decelerated. The service sector, too, slowed in the construction, transport, trade, communication, and hotels and restaurants sub-sectors. The financial crisis had some adverse impact on the IT sector. Exports declined in absolute terms in October. Higher input costs and dampened demand dented corporate margins, while the uncertainty surrounding the crisis affected business confidence.
To summarize, reckless subprime lending, the loose monetary policy of the US, the expansion of financial derivatives beyond acceptable norms, and the greed of Wall Street have led to this exceptional global financial and economic crisis. Thus, the global credit crisis of 2008 highlights the need to redesign both the global and domestic financial regulatory systems, not only to properly address systemic risk but also to support their proper functioning (i.e., financial stability). Such a design requires: 1) well-managed financial institutions with effective corporate governance and risk management systems; 2) disclosure requirements sufficient to support market discipline; 3) proper mechanisms for resolving problem institutions; and 4) mechanisms to protect financial services consumers in the event of a financial institution's failure.
Keywords: FIIs, BSE, sensex, global impact
420 Managing Human-Wildlife Conflicts Compensation Claims Data Collection and Payments Using a Scheme Administrator
Authors: Eric Mwenda, Shadrack Ngene
Abstract:
Human-wildlife conflicts (HWCs) are the main threat to conservation in Africa, because wildlife needs overlap with those of humans. In Kenya, about 70% of wildlife occurs outside protected areas. As a result, wildlife and human ranges overlap, causing HWCs. The HWCs in Kenya occur in the drylands adjacent to protected areas. The top five counties with the highest incidences of HWC are Taita Taveta, Narok, Lamu, Kajiado, and Laikipia. The common wildlife species responsible for HWCs are elephants, buffaloes, hyenas, hippos, leopards, baboons, monkeys, snakes, and crocodiles. To ensure individuals affected by the conflicts are compensated, Kenya has developed a model of HWC compensation claims data collection and payment. We collected data on HWC from all eight Kenya Wildlife Service (KWS) Conservation Areas from 2009 to 2019. Additional data was collected from stakeholders' consultative workshops held in the Conservation Areas and from a literature review regarding payment for injuries and ongoing insurance schemes being practiced in the areas. This was followed by a description of the claims administration process and calculation of the pricing of the compensation claims. We further developed a digital platform for data capture and processing of all reported conflict cases and payments. Our product recognized four categories of HWC (i.e., human death and injury, property damage, crop destruction, and livestock predation). Compensation for personal bodily injury and human death was based on the Continental Scale of Benefits. We proposed a maximum of Kenya Shillings (KES) 3,000,000 for death. Medical, pharmaceutical, and hospital expenses were capped at a maximum of KES 150,000, and funeral costs at KES 50,000. Pain and suffering were proposed to be paid for 12 months at the rate of KES 13,500 per month. Crop damage was to be based on farm input costs, at a maximum of KES 150,000 per claim.
Livestock predation leading to death was based on the Tropical Livestock Unit (TLU), equivalent to KES 30,000: Cattle (1 TLU = KES 30,000), Camel (1.4 TLU = KES 42,000), Goat (0.15 TLU = KES 4,500), Sheep (0.15 TLU = KES 4,500), and Donkey (0.5 TLU = KES 15,000). Property destruction (buildings, outside structures, and harvested crops) was capped at KES 150,000 per claim. We conclude that it is possible to use an administrator to collect data on HWC compensation claims and make payments using technology. The success of the new approach will depend on a piloting program. We recommend that a pilot scheme be initiated for eight months in Taita Taveta, Kajiado, Baringo, Laikipia, Narok, and Meru Counties. This will test the claims administration process as well as harmonize data collection methods. The results of this pilot will be crucial in adjusting the scheme before country-wide roll-out.
Keywords: human-wildlife conflicts, compensation, human death and injury, crop destruction, predation, property destruction
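The TLU-based pricing described above is simple enough to sketch directly. The snippet below encodes the proposed TLU values and computes a predation payout in Kenya Shillings; the function name and the incident example are illustrative, not part of the scheme itself.

```python
# Sketch of the TLU-based livestock compensation pricing proposed above.
# The TLU values come from the scheme; the function name is illustrative.

TLU_VALUE_KES = 30_000  # 1 Tropical Livestock Unit = KES 30,000

# TLU equivalents per animal, as proposed in the scheme
TLU_PER_ANIMAL = {
    "cattle": 1.0,
    "camel": 1.4,
    "goat": 0.15,
    "sheep": 0.15,
    "donkey": 0.5,
}

def predation_payout_kes(animal: str, head_count: int = 1) -> int:
    """Compensation for livestock killed by wildlife, in Kenya Shillings."""
    return round(TLU_PER_ANIMAL[animal] * TLU_VALUE_KES * head_count)

# e.g. two goats and one camel killed in a single incident:
total = predation_payout_kes("goat", 2) + predation_payout_kes("camel", 1)
print(total)  # 51000  (2 x KES 4,500 + KES 42,000)
```

A claims administrator's platform would wrap a rule table like this with the per-claim caps quoted above (e.g. KES 150,000 for property destruction).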
419 The Impact of Economic Status on Health Status in the Context of Bangladesh
Authors: Md. S. Sabuz
Abstract:
Bangladesh, a South Asian developing country, has achieved a remarkable breakthrough in health indicators during the last four decades despite immense income inequality. This phenomenon conceals the exclusion of marginalized people from health care facilities, and the persistence of this exclusion of the disadvantaged remains troubling. Exclusion stems from occupational inferiority, pay and wage differences, educational backwardness, and gender disparity, through to urban-rural complexity, and shuts the unprivileged out of seeking and availing health services. Evidence from Bangladesh shows that many sick people prefer to die at home without seeking medical services because they were poorly treated in the past, not because the medical facilities were inadequate or outdated, but because their socio-economic class consigned them to harsh treatment. Furthermore, the government and policymakers have placed enormous emphasis on infrastructural development and on achieving health indicators instead of ensuring quality services and the inclusiveness of people from all spheres. Therefore, it is high time to address these issues and highlight the impact of economic status on health status from a sociological perspective. The objective of this study is to consider ways of assessing and exploring the impact of economic status (for instance, occupational status and pay and wage variables) on health status in the context of Bangladesh. The hypotheses are that a significant number of factors affecting economic status are ultimately impactful for health status, with acute income inequality the most prominent. Illiteracy, gender disparity, remoteness, distrust of services, high costs, superstition, etc. are the dominant indicators behind the economic factors influencing health status. The chosen methodology combines qualitative and quantitative approaches to accomplish the research objectives.
Secondary sources of data will also be used to conduct the study. Surveys will be conducted among people who have been through the health care facilities and people from different socio-economic and cultural backgrounds. Focus group discussions will be conducted to acquire data from citizens of different cultures and regions. The findings show that 48% of people from disadvantaged communities have been deprived of proper health care facilities. The general reasons behind this are the higher costs of medicines and other equipment. A significant number of people are unaware of the appropriate facilities. It was found that socio-economic variables are the main influential factors driving both the economic dimension and health status. Above all, regional and gender dimensions have an enormous effect on determining the health status of an individual or community. Amidst many positive achievements, for example the decrease in the child mortality rate and the increase in child immunization programs, the inclusiveness of all classes of people in health care facilities has been overshadowed in Bangladesh. This phenomenon, along with the socio-economic and cultural phenomena, significantly undermines the quality and inclusiveness of the health status of the people.
Keywords: cultural context of health, economic status, gender and health, rural health care
418 India’s Energy Transition, Pathways for Green Economy
Authors: B. Sudhakara Reddy
Abstract:
In a modern economy, energy is fundamental to virtually every product and service in use, and it has developed on a dependence on abundant, easy-to-transform, polluting fossil fuels. On the one hand, increases in population and income levels combined with increased per capita energy consumption require energy production to keep pace with economic growth; on the other, the impact of fossil fuel use on environmental degradation is enormous. The conflicting policy objectives of protecting the environment while increasing economic growth and employment have resulted in this paradox. It is therefore important to decouple economic growth from environmental degradation. Hence, the search for green energy involving affordable, low-carbon, and renewable energies has become a global priority. This paper explores a transition to a sustainable energy system using the socio-economic-technical scenario method. This approach takes into account the multifaceted nature of transitions, which require not only the development and use of new technologies but also changes in user behaviour, policy, and regulation. The scenarios developed are a baseline business-as-usual (BAU) scenario and a green energy (GE) scenario. The baseline scenario assumes that current trends (energy use, efficiency levels, etc.) will continue in the future. India’s population is projected to grow by 23% during 2010–2030, reaching 1.47 billion. Real GDP, as per the model, is projected to grow by 6.5% per year on average between 2010 and 2030, reaching US$5.1 trillion, or $3,586 per capita (base year 2010). Due to the increases in population and GDP, primary energy demand will double in two decades, reaching 1,397 MTOE in 2030, with the share of fossil fuels remaining around 80%. The increase in energy use corresponds to an increase in energy intensity (TOE/US$ of GDP) from 0.019 to 0.036.
Carbon emissions are projected to increase 2.5 times from 2010, reaching 3,440 million tonnes, with per capita emissions of 2.2 tons per annum. However, carbon intensity (tons per US$ of GDP) decreases from 0.96 to 0.67. Under the GE scenario, energy use will reach 1,079 MTOE by 2030, a saving of about 23% over BAU, driven by the penetration of renewable energy resources. The reduction in fossil fuel demand and the focus on clean energy reduce energy intensity to 0.21 (TOE/US$ of GDP) and carbon intensity to 0.42 (tons/US$ of GDP) under the GE scenario. The study develops new ‘pathways out of poverty’ by creating more than 10 million jobs, thus raising the standard of living of low-income people. Our scenarios are, to a great extent, based on existing technologies; the challenges to this path lie in the socio-economic-political domain. However, to attain a green economy, the appropriate policy package should be in place, which will be critical in determining the kind of investments needed and the incidence of costs and benefits. These results provide a basis for policy discussions on the investments, policies, and incentives to be put in place by national and local governments.
Keywords: energy, renewables, green technology, scenario
417 Strategies for Conserving Ecosystem Functions of the Aravalli Range to Combat Land Degradation: Case of Kishangarh and Tijara Tehsil in Rajasthan, India
Authors: Saloni Khandelwal
Abstract:
The Aravalli hills are one of the oldest and most distinctive mountain chains of peninsular India, spanning around 692 km. More than 60% of the range falls in the state of Rajasthan and influences the ecological equilibrium in about 30% of the state. Because of natural and human-induced activities, physical gaps in the Aravallis are widening, new gaps are appearing, and its physical structure is changing. There are no strict regulations to protect and monitor the Aravallis, and no comprehensive research has been done on enhancing the ecosystem functions of these ranges. Through this study, various factors leading to the Aravalli's degradation are identified and their impacts on selected areas are analyzed. A literature study is done to identify the factors responsible for the degradation. To understand the severity of the problem at the lowest level, two tehsils from different districts of Rajasthan, the most affected by illegal mining and widening physical gaps, are selected for the study. Case 1, of three gram panchayats in Kishangarh Tehsil of Ajmer district, focuses on the expanding physical gaps in the Aravalli range, and Case 2, of three gram panchayats in Tijara Tehsil of Alwar district, focuses on increasing illegal mining in the Aravalli range. For measuring the degradation, physical, biological, and social indicators are identified through a literature review, and for both cases the analysis is done on the basis of these indicators. A primary survey and focus group discussions are conducted with villagers, mine owners, illegal miners, and various government officials to understand the dependency of people on the Aravalli and its importance to them, along with the impact of degradation on their livelihoods and environment.
From the analysis, it has been found that green cover is continuously decreasing in both cases, dense forest areas no longer exist, the groundwater table is depleting at a very fast rate, and the soil is losing its moisture, resulting in low yields and a shift in agriculture. Wild animals which were easily seen earlier have now disappeared. Villagers' cattle depend on the forest area in the Aravalli range for food, but with the decrease in fodder, cattle numbers are declining. There is a decrease in agricultural land and an increase in scrub and salt-affected land. Analysis of the various national and state programmes and acts passed to conserve biodiversity shows that none of them is helping much to protect the Aravalli. Conserving the Aravalli and its forest areas requires regional-level and local-level initiatives, which are proposed in this study. This study is an attempt to formulate conservation and management strategies for the Aravalli range. These strategies will help in improving biodiversity, which can lead to the revival of its ecosystem functions. They will also help in curbing pollution at the regional and local levels. All this will lead to the sustainable development of the region.
Keywords: Aravalli, ecosystem, LULC, Rajasthan
416 Carbon Aerogels with Tailored Porosity as Cathode in Li-Ion Capacitors
Authors: María Canal-Rodríguez, María Arnaiz, Natalia Rey-Raap, Ana Arenillas, Jon Ajuria
Abstract:
The constant demand for electrical energy, as well as increasing environmental concern, leads to the necessity of investing in clean and eco-friendly energy sources, which implies the development of enhanced energy storage devices. Li-ion batteries (LIBs) and electrical double layer capacitors (EDLCs) are the most widespread energy storage systems. Batteries are able to store high energy densities, contrary to capacitors, whose main strengths are high power density and long cycle life. The combination of both technologies gave rise to Li-ion capacitors (LICs), which offer all these advantages in a single device. This is achieved by combining a capacitive, supercapacitor-like positive electrode with a faradaic, battery-like negative electrode. Due to the abundance and affordability of carbon, dual carbon-based LICs are nowadays the common technology. Normally, an activated carbon (AC) is used as the EDLC-like electrode, while graphite is the material commonly employed as the anode. LICs are potential systems for applications in which both high energy and high power densities are required, such as kinetic energy recovery systems. Although these devices are already on the market, some drawbacks, like the limited power delivered by graphite or the energy-limiting nature of AC, must be solved to trigger their use. Focusing on the anode, one possibility could be to replace graphite with hard carbon (HC). The better rate capability of the latter increases the power performance of the device. Moreover, the disordered carbonaceous structure of HCs enables storing twice the theoretical capacity of graphite. With respect to the cathode, ACs are characterized by their high volume of micropores, in which the charge is stored. Nevertheless, they normally do not show mesopores, which are very important, mainly at high C-rates, as they act as transport channels for the ions to reach the micropores.
Usually, the porosity of ACs cannot be tailored, as it strongly depends on the precursor employed to obtain the final carbon. Moreover, ACs are not characterized by high electrical conductivity, which is an important property for good performance in energy storage applications. A possible candidate to substitute for ACs is carbon aerogels (CAs). CAs are materials that combine high porosity with great electrical conductivity, usually opposing characteristics in carbon materials. Furthermore, their porous properties can be tailored quite accurately according to the requirements of the application. In the present study, CAs with controlled porosity were obtained from the polymerization of resorcinol and formaldehyde by microwave heating. By varying the synthesis conditions, mainly the amount of precursors and the pH of the precursor solution, carbons with different textural properties were obtained. The way the porous characteristics affect the performance of the cathode was studied by means of a half-cell configuration. The material with the best performance was evaluated as the cathode in a LIC versus a hard carbon as the anode. An analogous full LIC made with a highly microporous commercial cathode was also assembled for comparison purposes.
Keywords: li-ion capacitors, energy storage, tailored porosity, carbon aerogels
415 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem
Authors: Nan Xu
Abstract:
In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with days off, training, and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the objective of rostering consists of two major components: the first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly hours are as close to the expected averages as possible. Deviations from the expected averages are penalized in the objective function. Since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem, and exactly one roster is picked for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member.
A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in this graph, solved with a labeling algorithm. Since the penalization is quadratic, a method for dealing with the non-additive shortest path problem using a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows relaxing some soft rules, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating lines of work for crew members. In summary, a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm we propose in this paper has been put into production at a major airline in China, and numerical experiments show that it has good performance.
Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC
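As an illustration of why the quadratic fairness term makes the pricing subproblem non-additive, the sketch below enumerates labels on a tiny acyclic pairing graph and scores complete rosters with a quadratic fly-hour penalty. The graph, costs, and target hours are invented for illustration; a real implementation would add the paper's domination condition to prune labels safely, which plain cost comparison cannot do here.

```python
# Toy pricing subproblem: find the best single-crew roster (a path through an
# acyclic pairing graph) when the roster cost adds a quadratic fairness
# penalty on total fly hours. Graph and numbers are illustrative only.

# node -> list of (successor, reduced cost of arc, fly hours added)
GRAPH = {
    "src": [("p1", -3.0, 10), ("p2", -1.0, 4)],
    "p1":  [("p3", -4.0, 6), ("sink", 0.0, 0)],
    "p2":  [("p3", -4.0, 8), ("sink", 0.0, 0)],
    "p3":  [("sink", 0.0, 0)],
}
TOPO_ORDER = ["src", "p1", "p2", "p3", "sink"]
TARGET_HOURS = 12   # expected average fly hours for this crew member
PENALTY = 0.5       # weight of the quadratic fairness term

def roster_cost(linear_cost, hours):
    # non-additive: the quadratic term is only known once the path is complete
    return linear_cost + PENALTY * (hours - TARGET_HOURS) ** 2

def best_roster():
    # propagate labels (path, linear cost, fly hours); the graph is tiny, so
    # no domination/pruning is applied here
    labels = {"src": [(("src",), 0.0, 0)]}
    for node in TOPO_ORDER:
        for path, cost, hrs in labels.get(node, []):
            for nxt, c, h in GRAPH.get(node, []):
                labels.setdefault(nxt, []).append((path + (nxt,), cost + c, hrs + h))
    return min(labels["sink"], key=lambda lab: roster_cost(lab[1], lab[2]))

path, cost, hours = best_roster()
print(path, roster_cost(cost, hours))  # ('src', 'p2', 'p3', 'sink') -5.0
```

Note that the label reaching p3 via p1 has the lower linear cost (-7 versus -5) yet loses once the quadratic term is added at the sink, which is exactly why comparing accumulated cost alone is an unsafe domination rule for this problem.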
414 Is Liking for Sampled Energy-Dense Foods Mediated by Taste Phenotypes?
Authors: Gary J. Pickering, Sarah Lucas, Catherine E. Klodnicki, Nicole J. Gaudette
Abstract:
Two taste phenotypes that are of interest in the study of habitual diet-related risk factors and disease are 6-n-propylthiouracil (PROP) responsiveness and thermal tasting. Individuals differ considerably in how intensely they experience the bitterness of PROP, which is partially explained by three major single nucleotide polymorphisms associated with the TAS2R38 gene. Importantly, this variable responsiveness is a useful proxy for general taste responsiveness and links to diet-related disease risk, including body mass index, in some studies. Thermal tasting, a newly discovered taste phenotype independent of PROP responsiveness, refers to the capacity of many individuals to perceive phantom tastes in response to lingual thermal stimulation, and is linked with TRPM5 channels. Thermal tasters (TTs) also experience oral sensations more intensely than thermal non-tasters (TnTs), and this was shown to associate with differences in self-reported food preferences in a previous survey from our lab. Here we report on two related studies, in which we sought to determine whether PROP responsiveness and thermal tasting would associate with perceptual differences in the oral sensations elicited by sampled energy-dense foods, and whether in turn this would influence liking. We hypothesized that hyper-tasters (thermal tasters and individuals who experience PROP intensely) would (a) rate sweet and high-fat foods more intensely than hypo-tasters, and (b) differ from hypo-tasters in liking scores. (Liking has been proposed recently as a more accurate measure of actual food consumption.) In Study 1, a range of energy-dense foods and beverages, including table cream and chocolate, was assessed by 25 TTs and 19 TnTs. Ratings of oral sensation intensity and overall liking were obtained using gVAS and gDOL scales, respectively. TTs and TnTs did not differ significantly in intensity ratings for most stimuli (ANOVA).
In a second study, 44 female participants sampled 22 foods and beverages, assessing them for intensity of oral sensations (gVAS) and overall liking (9-point hedonic scale). TTs (n=23) rated their overall liking of creaminess and milk products lower than did TnTs (n=21), and liked milk chocolate less. PROP responsiveness was negatively correlated with liking of foods and beverages belonging to the sweet sensory grouping. No other differences in intensity or liking scores between hyper- and hypo-tasters were found. Taken overall, our results are somewhat unexpected, lending only modest support to the hypothesis that these taste phenotypes associate with energy-dense food liking and consumption through differences in the oral sensations they elicit. Reasons for this lack of concordance with expectations and some prior literature are discussed, and suggestions for future research are advanced.
Keywords: taste phenotypes, sensory evaluation, PROP, thermal tasting, diet-related health risk
413 Impact of Displacement Durations and Monetary Costs on the Labour Market within a City Consisting of Four Areas: A Theoretical Approach
Authors: Aboulkacem El Mehdi
Abstract:
We develop a theoretical model at the crossroads of labour and urban economics to explain the mechanism through which the duration of home-workplace trips and their monetary costs affect labour demand and supply in a spatially scattered labour market, and how these are affected by a change in passenger transport infrastructures and services. The spatial disconnection between homes and job opportunities is referred to as the spatial mismatch hypothesis (SMH). Its harmful impact on employment has been the subject of numerous theoretical propositions. However, the theoretical models proposed so far are patterned on the American context, which is particular in that it is marked by racial discrimination against Blacks in the housing and labour markets. It is therefore natural that most of these models are designed to reproduce a steady state in which agents carry out their economic activities in a monocentric city where most unskilled jobs are created in the suburbs, far from the Blacks who dwell in the city centre, generating high unemployment rates for Blacks, while the White population resides in the suburbs and has a low unemployment rate. Our model relies on no racial discrimination and does not aim at reproducing a steady state in which these stylized facts are replicated; it takes the main principle of the SMH, the spatial disconnection between homes and workplaces, as a starting point. One of the innovative aspects of the model consists in dealing with an SMH-related issue at an aggregate level: we link the parameters of the passenger transport system to employment in the whole area of a city. We consider here a city that consists of four areas: two of them are residential areas with unemployed workers, and the other two host firms looking for labour.
The workers compare the indirect utility of working in each area with the utility of unemployment and choose between submitting an application for the job that generates the highest indirect utility or not submitting. This arbitration takes into account the monetary and time expenditures generated by the trips between the residential areas and the working areas. Each of these expenditures is clearly and explicitly formulated so that the impact of each can be studied separately from that of the other. The first findings show that unemployed workers living in an area benefiting from good transport infrastructures and services have a better chance of preferring activity to unemployment and are more likely to supply a higher 'quantity' of labour than those who live in an area where the transport infrastructures and services are poorer. We also show that firms located in the most accessible area receive many more applications and are more likely to hire the workers who provide the highest quantity of labour than firms located in the less accessible areas. Currently, we are working on the matching process between firms and job seekers and on how the equilibrium between labour demand and supply occurs.
Keywords: labour market, passenger transport infrastructure, spatial mismatch hypothesis, urban economics
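The participation decision described above can be illustrated with a toy numerical example: a worker compares the indirect utility of working in each of the two firm areas (wage net of the monetary commute cost and of the valued commute time) with the utility of staying unemployed. The linear utility form and all numbers below are illustrative assumptions, not the model's calibration.

```python
# Toy version of the worker's participation decision sketched above.
# All parameter values and the linear utility form are illustrative.

WAGE = {"A": 120.0, "B": 100.0}    # daily wage offered in each work area
FARE = {"A": 10.0, "B": 2.0}       # round-trip monetary commute cost
MINUTES = {"A": 90, "B": 20}       # round-trip commute duration
VALUE_OF_TIME = 0.5                # disutility per commute minute
UNEMPLOYMENT_UTILITY = 95.0        # utility of staying unemployed

def indirect_utility(area: str) -> float:
    return WAGE[area] - FARE[area] - VALUE_OF_TIME * MINUTES[area]

best = max(WAGE, key=indirect_utility)
applies = indirect_utility(best) > UNEMPLOYMENT_UTILITY
print(best, indirect_utility(best), applies)  # B 88.0 False

# A transport improvement cutting the commute to area A to 20 minutes flips
# the decision: 120 - 10 - 0.5 * 20 = 100 > 95, so the worker now applies.
MINUTES["A"] = 20
print(indirect_utility("A") > UNEMPLOYMENT_UTILITY)  # True
```

The second print illustrates the paper's first finding: better transport infrastructure raises the indirect utility of working and makes activity preferable to unemployment.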
412 A Smart Sensor Network Approach Using Affordable River Water Level Sensors
Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan
Abstract:
Recent developments in sensors, wireless data communication, and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the 'Internet of Things (IoT)' has taken sensor research to a new level, which involves the development of long-lasting, low-cost, environmentally friendly, smart sensors; new wireless data communication technologies; and big data analytics algorithms and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them all, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, collected data will be far too large for traditional applications to send, store, or process, so the sensor unit must be intelligent enough to pre-process collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model, and the machine learning based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network.
For example, in a water level monitoring system, a weather forecast can be obtained from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate or, conversely, switch on sleep mode. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, to assist agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Each key component of the smart sensor network is discussed, which will hopefully inspire researchers working in the sensor research domain.
Keywords: smart sensing, internet of things, water level sensor, flooding
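The simplest of the three smart sensing methods mentioned above, fixed thresholding, together with the data fusion idea from the weather-forecast example, can be sketched in a few lines. Threshold values and names below are hypothetical, not the deployed system's configuration.

```python
# Minimal sketch of on-board thresholding plus server-side data fusion for a
# river level node. Values and function names are illustrative assumptions.

ALERT_LEVEL_M = 1.8       # river level (metres) that raises an alert
NORMAL_PERIOD_S = 900     # sample every 15 min in normal conditions
STORM_PERIOD_S = 60       # sample every minute when heavy rain is forecast

def classify(level_m: float) -> str:
    """Run on the node (or field gateway): send events, not raw data."""
    return "ALERT" if level_m >= ALERT_LEVEL_M else "ok"

def sampling_period_s(heavy_rain_forecast: bool) -> int:
    """Server-side data fusion: an external forecast tunes node behaviour."""
    return STORM_PERIOD_S if heavy_rain_forecast else NORMAL_PERIOD_S

readings = [0.6, 0.7, 1.9, 2.1]
alerts = [r for r in readings if classify(r) == "ALERT"]
print(alerts, sampling_period_s(True))  # [1.9, 2.1] 60
```

The statistical and MoPBAS methods referenced in the text replace the fixed threshold with an adaptive one learned from the recent signal, but the on-board event-not-data structure stays the same.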
411 Influence of Temperature and Immersion on the Behavior of a Polymer Composite
Authors: Quentin C.P. Bourgogne, Vanessa Bouchart, Pierre Chevrier, Emmanuel Dattoli
Abstract:
This study presents experimental and theoretical work conducted on a polyphenylene sulfide reinforced with 40 wt% short glass fibers (PPS GF40) and on its matrix. Thermoplastics are widely used in the automotive industry to lighten automotive parts. The replacement of metallic parts by thermoplastics is reaching under-the-hood parts, near the engine. In this area, the parts are subjected to high temperatures and are immersed in cooling liquid. This liquid is composed of water and glycol and can affect the mechanical properties of the composite. The aim of this work was thus to quantify the evolution of the mechanical properties of the thermoplastic composite as a function of temperature and liquid aging effects, in order to develop a reliable design of parts. An experimental campaign in tensile mode was carried out at different temperatures and for various glycol proportions in the cooling liquid, under monotonic and cyclic loadings, on a neat and a reinforced PPS. The results of these tests highlighted some of the main physical phenomena occurring during loading under harsh hydrothermal conditions. Indeed, the tests showed that temperature and cooling liquid aging can affect the mechanical behavior of the material in several ways. The more water the cooling liquid contains, the more the mechanical behavior is affected. It was observed that PPS shows a higher sensitivity to absorption than to the chemical aggressiveness of the cooling liquid, explaining this dominant sensitivity. Two kinds of behavior were noted: an elasto-plastic type below the glass transition temperature and a visco-pseudo-plastic one above it. It was also shown that viscosity is the leading phenomenon above the glass transition temperature for the PPS and can also be important below this temperature, mostly under cyclic conditions and when the stress rate is low.
Finally, it was observed that loading this composite at high temperatures diminishes the benefit brought by the fibers. A new phenomenological model was then built to take these experimental observations into account. This new model allows the prediction of the evolution of mechanical properties as a function of the loading environment, with a reduced number of parameters compared to previous studies. It was also shown that the presented approach enables the description and prediction of the mechanical response with very good accuracy (2% average error at worst) over a wide range of hydrothermal conditions. A temperature-humidity equivalence principle was identified for the PPS, allowing aging effects to be considered within the proposed model. Then, a limit on the accuracy achievable by any model using this data set was determined by applying an artificial intelligence based model, allowing a comparison between artificial intelligence based models and phenomenological ones.
Keywords: aging, analytical modeling, mechanical testing, polymer matrix composites, sequential model, thermomechanical