Search results for: a modified estimation equation
773 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model
Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung
Abstract:
The aim of the present study was to explore dermal exposure assessment models for chemicals developed abroad and to evaluate the feasibility of chemical dermal exposure assessment models for the manufacturing industry in Taiwan. We applied and analyzed six semi-quantitative risk management tools: UK - Control of Substances Hazardous to Health (COSHH), Europe - Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), Netherlands - Dose Related Effect Assessment Model (DREAM), Netherlands - Stoffenmanager (STOFFEN), Nicaragua - Dermal Exposure Ranking Method (DERM), and USA/Canada - Public Health Engineering Department (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of the dermal exposure assessment models. To assess the effectiveness of the semi-quantitative assessment models, this study also generated quantitative dermal exposure estimates using a prediction model and verified the correlation via Pearson's test. Results show that COSHH was unable to discriminate the strength of its decision factors because the results for all industries fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor were all positively correlated with the assessment result. In the STOFFEN model, the fugitive emission, operation, near-field and far-field concentrations, and the operating time and frequency showed a positive correlation. In the DREAM model, skin exposure, relative working time, and working environment were positively correlated. In the RISKOFDERM model, the actual exposure situation and exposure time showed a positive correlation.
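The Monte Carlo sensitivity step described above can be sketched as follows. This is a minimal illustration with made-up factor names and weights (the actual models use the factors listed in the abstract): it samples inputs, scores a hypothetical additive semi-quantitative model, and correlates each factor with the score.

```python
import random

def dermal_score(factors):
    # Hypothetical additive semi-quantitative score: a weighted sum of
    # exposure factors. Weights are illustrative, not from any model.
    weights = {"transmission": 0.5, "exposed_area": 0.3, "clothing": 0.2}
    return sum(weights[k] * v for k, v in factors.items())

random.seed(42)
samples = []
for _ in range(10_000):
    f = {k: random.uniform(0, 10)
         for k in ("transmission", "exposed_area", "clothing")}
    samples.append((f, dermal_score(f)))

def pearson(xs, ys):
    # Pearson correlation coefficient from first principles.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Sensitivity: correlation of each input factor with the model output.
scores = [s for _, s in samples]
for k in ("transmission", "exposed_area", "clothing"):
    r = pearson([f[k] for f, _ in samples], scores)
    print(f"{k}: r = {r:.2f}")
```

With independent inputs and an additive score, the most heavily weighted factor shows the strongest correlation, which is the logic behind ranking "important evaluation indicators" in the abstract.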
We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p<0.05), respectively. The STOFFEN and DREAM models showed poor correlation, with coefficients of 0.24 and 0.29 (p>0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for use in the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation
Procedia PDF Downloads 170
772 Recursion, Merge and Event Sequence: A Bio-Mathematical Perspective
Authors: Noury Bakrim
Abstract:
Formalization is indeed foundational to Mathematical Linguistics, as demonstrated by the pioneering works. While dialoguing with this frame, we nonetheless propose, in our approach to language as a real object, a mathematical linguistics/biosemiotics defined as a dialectical synthesis between induction and computational deduction. Therefore, relying on the parametric interaction of cycles, rules, and features giving way to a sub-hypothetic biological point of view, we first hypothesize a factorial equation as an explanatory principle within the Category Mathematics of the Ergobrain: our computation proposal of Universal Grammar rules per cycle, or a scalar determination (multiplying right/left columns of the determinant matrix and right/left columns of the logarithmic matrix) of the transformable matrix for rule addition/deletion and cycles within representational mapping/cycle heredity, based on the factorial example, being the logarithmic exponent or power of rule deletion/addition. This enables us to propose an extension of the minimalist merge/label notions to a Language Merge (as a computing principle) within cycle recursion, relying on combinatorial mapping of rule hierarchies onto the external Entax of the Event Sequence.
Therefore, to define combinatorial maps as the language merge of features and combinatorial hierarchical restrictions (governing, commanding, and other rules), we secondly hypothesize from our results feature/hierarchy exponentiation on a graph representation deriving from Gromov's Symbolic Dynamics, where combinatorial vertices from Fe are set to combinatorial vertices of Hie, and edges run from Fe to Hie, such that for every combinatorial group there are restriction maps representing different derivational levels that are subgraphs: the intersection on I defines pullbacks and deletion rules (under restriction maps), then under disjunction edges H, such that for the combinatorial map P belonging to Hie exponentiation by intersection there are pullbacks and projections equal to the restriction maps RM₁ and RM₂. The model will draw on experimental biomathematics as well as structural frames, with a focus on Amazigh and English (cases from phonology/micro-semantics and syntax) and the shift from structure to event (especially the Amazigh formant principle resolving its morphological heterogeneity).
Keywords: rule/cycle addition/deletion, bio-mathematical methodology, general merge calculation, feature exponentiation, combinatorial maps, event sequence
Procedia PDF Downloads 129
771 Still Hepatocellular Carcinoma Risk Despite Proper Treatment of Chronic Viral Hepatitis
Authors: Sila Akhan, Muge Toygar, Murat Sayan, Simge Fidan
Abstract:
Chronic viral hepatitis B, C, and D can cause hepatocellular carcinoma (HCC), cirrhosis, and death. Proper treatment reduces the risk of developing HCC considerably, but not to zero. Materials and Methods: We retrospectively analysed the chronic viral hepatitis B, C, and D patients who attended our Infectious Diseases polyclinic between 2004 and 2018. Of 589 biopsy-proven chronic hepatitis patients, 3 developed hepatocellular carcinoma during our follow-up. The first case is a 74-year-old patient whose HCV infection was diagnosed 8 years ago. His first treatment was pegylated interferon plus ribavirin for only 28 weeks, because of HCV RNA breakthrough under treatment. In 2013 he was retreated with telaprevir plus pegylated interferon and ribavirin for 24 weeks, but at the end of therapy his HCV RNA was 1,290,000 IU/mL. He had abdominal ultrasonography (US) and alpha-fetoprotein (AFP) controls at 6-month intervals. All seemed normal until 2015, when HCC was found incidentally on abdominal magnetic resonance imaging (MRI). His treatment began in the Oncology Clinic after the HCC was verified by biopsy, and sofosbuvir/ledipasvir was then given for 24 weeks for the HCV. Sustained virologic response (SVR) was obtained. He is cured of his HCV infection and under Oncology follow-up for the HCC. The second patient is a 36-year-old man who has known of his HBV infection since 2008; he was HBsAg and HBeAg positive and HDV RNA negative. Liver biopsy revealed grade 4, stage 3-4 disease according to the modified Knodell scoring system. In 2010 tenofovir treatment was begun. His abdominal US and AFP were normal, and his controls at 6-month intervals showed negative HBV DNA and normal US and AFP continuously until 2016, when AFP was found to be 37, above the normal range, and HCC was then found on MRI. The third patient is a 57-year-old man who already had cirrhosis when his hepatitis B infection was first diagnosed; tenofovir was begun as treatment. He developed HCC within a short time despite normal AFP values.
Conclusion: In Mediterranean countries, including Turkey, naturally occurring pre-S/S variants are found in more than 75% of all chronic hepatitis B patients. These variants may contribute to the development of progressive liver damage and hepatocarcinogenesis. HCV-induced development of HCC is a gradual process and is affected by the duration of disease and the viral genotype. All chronic viral hepatitis patients should be followed up for HCC at 6-month intervals, and not with US and AFP alone. Even with proper treatment, there is always a risk of developing HCC; chronic hepatitis patients cannot be dropped from follow-up even when treated well.
Procedia PDF Downloads 139
770 Femoral Neck Anteversion and Neck-Shaft Angles: Determination and Their Clinical Implications in Fetuses of Different Gestational Ages
Authors: Vrinda Hari Ankolekar, Anne D. Souza, Mamatha Hosapatna
Abstract:
Introduction: Precise anatomical assessment of femoral neck anteversion (FNA) and the neck-shaft angle (NSA) is essential in diagnosing pathological conditions involving the hip joint and its ligaments. An FNA of greater than 20 degrees is considered excessive femoral anteversion, whereas a torsion angle of less than 10 degrees is considered femoral retroversion. Excessive femoral torsion is not uncommon and has been associated with certain neurologic and orthopedic conditions. The enlargement and maturation of the hip joint accelerate from the 20th week of gestation, and the NSA ranges from 135-140° at birth. Materials and Methods: 48 femurs were tagged according to gestational age (GA), and two photographs of each femur were taken using a Nikon digital camera. Each femur was kept on a horizontal hard desk; an end-on image of the upper end was taken for the estimation of the FNA, and a photograph in a perpendicular plane was taken to calculate the NSA. The images were transferred to a computer and stored in TIFF format. Microsoft Paint was used to mark the points, and ImageJ was used to calculate the angles digitally. 1. Calculation of FNA: The midpoints of the femoral head and the neck were marked and a line was drawn joining these two points. The angle made by this line with the horizontal plane was measured as the FNA. 2. Calculation of NSA: The midpoints of the femoral head and the neck were marked and a line was drawn joining these two points. A vertical line was drawn passing through the tip of the greater trochanter to the intercondylar notch. The angle formed by these lines was calculated as the NSA. Results: The paired t-test for inter-observer variability showed no significant difference between the values of the two observers (FNA: t=-1.06, p=0.31; NSA: t=-0.09, p=0.9). The FNA ranged from 17.08° to 33.97° on the right and 17.32° to 45.08° on the left. The NSA ranged from 139.33° to 124.91° on the right and 143.98° to 123.8° on the left.
An unpaired t-test was applied to compare the mean angles between the second and third trimesters, which did not show any statistical significance; thus the FNA and NSA of the femur did not vary significantly during the third trimester. The FNA and NSA were correlated with GA using Pearson's correlation. The FNA appeared to increase with GA (r=0.5), but the increase was not statistically significant. A decrease in the NSA was also noted with GA (r=-0.3), which was likewise not statistically significant. Conclusion: The present study evaluates the FNA and NSA of the femur in fetuses and correlates their development with GA during the second and third trimesters. The FNA and NSA did not vary significantly during the third trimester.
Keywords: anteversion, coxa antetorsa, femoral torsion, femur neck shaft angle
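The two digital angle measurements described in the Methods can be sketched in a few lines of Python. The landmark coordinates below are hypothetical pixel positions; the study itself marked points in Microsoft Paint and measured angles in ImageJ.

```python
import math

def angle_with_horizontal(p1, p2):
    # Angle (degrees) between the line p1->p2 and the horizontal plane,
    # as used for the femoral neck anteversion (FNA) measurement.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(abs(dy), abs(dx)))

def angle_between_lines(a1, a2, b1, b2):
    # Angle (degrees) between line a1->a2 (head-neck axis) and line
    # b1->b2 (greater trochanter to intercondylar notch), as used for
    # the neck-shaft angle (NSA) measurement.
    va = (a2[0] - a1[0], a2[1] - a1[1])
    vb = (b2[0] - b1[0], b2[1] - b1[1])
    dot = va[0] * vb[0] + va[1] * vb[1]
    return math.degrees(math.acos(dot / (math.hypot(*va) * math.hypot(*vb))))

# Hypothetical pixel coordinates marked on the end-on photograph:
head, neck = (120.0, 80.0), (200.0, 115.0)
print(round(angle_with_horizontal(head, neck), 1))  # FNA estimate
```

On calibrated photographs, only relative pixel positions matter for the angle, so no physical scale is needed.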
Procedia PDF Downloads 320
769 Analysis and Design of Exo-Skeleton System Based on Multibody Dynamics
Authors: Jatin Gupta, Bishakh Bhattacharya
Abstract:
With the aging process, many people start suffering from weak limbs, resulting in mobility disorders and loss of sensory and motor function. Wearable robotic devices are viable solutions to help people suffering from these issues by augmenting their strength. These robotic devices, popularly known as exoskeletons, aid the user by providing external power and controlling the dynamics so as to achieve the desired motion. The present work studies a simplified dynamic model of the human gait. A four-link open-chain kinematic model is developed to describe the dynamics of the Single Support Phase (SSP) of the human gait cycle. The dynamic model is developed by integrating mathematical models of the motion of inverted and triple pendulums. The stance leg is modeled as an inverted pendulum with a single degree of freedom and the swing leg as a triple pendulum with three degrees of freedom, viz. the thigh, knee, and ankle joints. The kinematic model is formulated using a forward kinematics approach, and a Lagrangian approach is used to formulate the governing dynamic equations of the model. For the resulting system of nonlinear differential equations, a numerical method is employed to obtain the system response. The reference trajectory is generated using the human body simulator LifeMOD. For optimal mechanical design and controller design of an exoskeleton system, it is imperative to study the parameter sensitivity of the system. Six parameters, viz. the thigh, shank, and foot masses and lengths, were varied from 85% to 115% of their original values in the present work. It is observed that the hip joint of the swing leg is the most sensitive and the ankle joint of the swing leg the least sensitive. Changing link lengths causes more deviation in the system response than changing link masses; the shank length and thigh mass are the most sensitive parameters.
Finally, the present study gives insight into the different factors that should be considered while designing a lower extremity exoskeleton.
Keywords: lower limb exoskeleton, multibody dynamics, energy based formulation, optimal design
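As a much-reduced sketch of the parameter-sensitivity idea, the snippet below uses only the single-degree-of-freedom stance-leg (inverted pendulum) component and perturbs one parameter at a time to ±15%, as in the study. All numbers are illustrative. Note that in this stripped-down, unforced point-mass model the link mass drops out of the dynamics entirely, which loosely echoes the finding that link lengths matter more than link masses (in the full four-link model, masses still enter through inter-link coupling).

```python
import math

def stance_leg_response(length, mass, theta0=0.05, dt=1e-3, t_end=0.5):
    # Unforced inverted-pendulum stance leg: theta_ddot = (g / L) * sin(theta).
    # Point-mass form for brevity; mass cancels out of the unforced dynamics.
    g = 9.81
    theta, omega = theta0, 0.0
    trajectory = []
    t = 0.0
    while t < t_end:
        alpha = (g / length) * math.sin(theta)  # angular acceleration
        omega += alpha * dt                     # explicit Euler integration
        theta += omega * dt
        trajectory.append(theta)
        t += dt
    return trajectory

def deviation(reference, perturbed):
    # Root-mean-square deviation between two joint-angle responses.
    return (sum((a - b) ** 2 for a, b in zip(reference, perturbed))
            / len(reference)) ** 0.5

ref = stance_leg_response(length=0.9, mass=60.0)
for scale in (0.85, 1.15):
    d_len = deviation(ref, stance_leg_response(length=0.9 * scale, mass=60.0))
    d_mass = deviation(ref, stance_leg_response(length=0.9, mass=60.0 * scale))
    print(f"scale {scale}: length dev {d_len:.4f}, mass dev {d_mass:.4f}")
```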
Procedia PDF Downloads 202
768 Oligoalkylamine Modified Poly(Amidoamine) Generation 4.5 Dendrimer for the Delivery of Small Interfering RNA
Authors: Endris Yibru Hanurry, Wei-Hsin Hsu, Hsieh-Chih Tsai
Abstract:
In recent years, the discovery of small interfering RNAs (siRNAs) has attracted great attention for the treatment of cancer and other diseases. However, the therapeutic efficacy of siRNAs faces many drawbacks: a short half-life in blood circulation, poor membrane penetration, weak endosomal escape, and inadequate release into the cytosol. To overcome these drawbacks, we designed a non-viral vector by conjugating poly(amidoamine) generation 4.5 dendrimer (PDG4.5) with diethylenetriamine (DETA) and tetraethylenepentamine (TEPA), followed by binding with siRNA to form polyplexes through electrostatic interaction. The results of 1H nuclear magnetic resonance (NMR), 13C NMR, correlation spectroscopy, heteronuclear single-quantum correlation spectroscopy, and Fourier transform infrared spectroscopy confirmed the successful conjugation of DETA and TEPA with PDG4.5. The size, surface charge, morphology, binding ability, stability, release assay, toxicity, and cellular internalization were then analyzed to explore the physicochemical and biological properties of the PDG4.5-DETA and PDG4.5-TEPA polyplexes at specific N/P ratios. The polyplexes (N/P = 8) exhibited spherical nano-sized particles (125 and 85 nm) with optimal surface charge (13 and 26 mV), showed strong siRNA binding ability, protected the siRNA against enzyme digestion, and displayed acceptable biocompatibility with HeLa cells. Qualitatively, fluorescence microscopy revealed the delocalization of the polyplexes (Manders’ coefficients 0.63 and 0.53 for PDG4.5-DETA and PDG4.5-TEPA, respectively) and the translocation of the siRNA throughout the cytosol, demonstrating decent cellular internalization and intracellular biodistribution of the polyplexes in HeLa cells. Quantitatively, flow cytometry indicated that a significant (p < 0.05) amount of siRNA was internalized by cells treated with the PDG4.5-DETA (68.5%) and PDG4.5-TEPA (73%) polyplexes.
Generally, PDG4.5-DETA and PDG4.5-TEPA were ideal nanocarriers of siRNA in vitro and might be used as promising candidates for in vivo studies and future pharmaceutical applications.
Keywords: non-viral carrier, oligoalkylamine, poly(amidoamine) dendrimer, polyplexes, siRNA
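The N/P ratio quoted above (N/P = 8) is the molar ratio of protonatable amine nitrogens on the polymer to phosphate groups on the siRNA backbone (one phosphate per nucleotide). A back-of-the-envelope sketch follows; the nitrogen count per conjugated dendrimer is an illustrative placeholder, not the actual PDG4.5-DETA/TEPA stoichiometry.

```python
def n_to_p_ratio(nmol_polymer, nitrogens_per_polymer, nmol_sirna, nts_per_sirna):
    # N/P = (moles of amine nitrogens) / (moles of backbone phosphates).
    n = nmol_polymer * nitrogens_per_polymer
    p = nmol_sirna * nts_per_sirna
    return n / p

# Hypothetical numbers: a 21-bp siRNA duplex has 42 nucleotides, hence
# 42 phosphates; 420 nitrogens per conjugate is purely illustrative.
ratio = n_to_p_ratio(nmol_polymer=0.8, nitrogens_per_polymer=420,
                     nmol_sirna=1.0, nts_per_sirna=42)
print(ratio)  # -> 8.0
```

In practice the mixing amounts are chosen to hit a target N/P, which in turn sets the polyplex surface charge and binding strength.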
Procedia PDF Downloads 132
767 Effect of Curing Temperature on the Textural and Rheological Properties of Gelatine-SDS Hydrogels
Authors: Virginia Martin Torrejon, Binjie Wu
Abstract:
Gelatine is a protein biopolymer obtained from the partial hydrolysis of animal tissues that contain collagen, the primary structural component of connective tissue. Gelatine hydrogels have attracted considerable research in recent years as an alternative to synthetic materials due to their outstanding gelling properties, biocompatibility, and compostability. Surfactants such as sodium dodecyl sulfate (SDS) are often used in hydrogel solutions as surface modifiers or solubility enhancers, and their incorporation can influence the hydrogel’s viscoelastic properties and, in turn, its processing and applications. The literature usually focuses on the impact of formulation parameters (e.g., gelatine content, gelatine strength, additives) on gelatine hydrogel properties, but processing parameters, such as curing temperature, are commonly overlooked. For example, some authors have reported a decrease in gel strength at lower curing temperatures, but there is a lack of systematic viscoelastic characterisation of high-strength gelatine and gelatine-SDS systems over a wide range of curing temperatures. This knowledge is essential to meet and adjust the technological requirements of different applications (e.g., viscosity, setting time, gel strength, or melting/gelling temperature). This work investigated the effect of curing temperature (10, 15, 20, 23, 25, and 30°C) on the elastic modulus (G’) and melting temperature of high-strength gelatine-SDS hydrogels, at 10 wt% and 20 wt% gelatine content, by small-amplitude oscillatory shear rheology coupled with Fourier transform infrared spectroscopy. It also correlates the gel strength obtained by rheological measurements with the gel strength measured by texture analysis.
The rheological behaviour of the gelatine and gelatine-SDS hydrogels strongly depended on the curing temperature, and their gel strength and melting temperature can be modified slightly to match given processing and application needs. Lower curing temperatures led to gelatine and gelatine-SDS hydrogels with considerably higher storage moduli; however, their melting temperatures were lower than those of the gels cured at higher temperatures, which had lower gel strength. This effect was more considerable at longer timescales. This behaviour is attributed to the development of thermally resistant structures in the lower-strength gels cured at higher temperatures.
Keywords: gelatine gelation kinetics, gelatine-SDS interactions, gelatine-surfactant hydrogels, melting and gelling temperature of gelatine gels, rheology of gelatine hydrogels
Procedia PDF Downloads 102
766 Evaluation of Age-Friendly Nursing Service System: KKU (AFNS:KKU) Model for Excellence
Authors: Roongtiwa Chobchuen, Siriporn Mongkholthawornchai, Boonsong Hatawaikarn, Uriwan Chaichangreet, Kobkaew Thongtid, Pusda Pukdeekumjorn, Panita Limpawattana
Abstract:
Background: The age-friendly nursing service system in Srinagarind Hospital has been developed continuously, based on the values and cultural background of Thailand, and incorporates the modified WHO Age-Friendly Primary Care Service System. It consists of three components: 1) development of staff training, 2) age-friendly service, and 3) an appropriate physical environment. Objective: To evaluate the efficacy of the Age-Friendly Nursing Service System: KKU (AFNS:KKU) model and to evaluate factors associated with nursing perception of AFNS:KKU. Study design: Descriptive study. Setting: 31 wards that served older patients in Srinagarind Hospital. Population: Nursing staff from 11 departments (31 wards). Instrument: The age-friendly nursing care scale as perceived by hospitalized older persons. Procedure and statistical analysis: All participants were asked questions using the age-friendly nursing care scale questionnaire. Descriptive statistics and multiple logistic regression analyses were used to analyse the outcomes. Results: 337 participants were recruited into this study. The majority were women (92%), with a mean age of 29 years, and 77.45% were nurse practitioners. They had an average of 5 years of nursing experience. The average scores on the age-friendly nursing care scale were high, and highest in the area of attitude and communication. Age, sex, educational level, duration of work, and experience in aging training were not associated with nursing perception, whereas type of department was an independent factor. Nurses from the departments of Surgery and Orthopedics, Eye and ENT, the special ward, and Obstetrics and Gynecology had significantly greater perception than nurses from the Internal Medicine department (p < 0.05). Conclusion: Nurses had high scores in all dimensions of the age-friendly concept, indicating that nurses have a good attitude toward aging care, which can lead to improved quality of care.
Organizations should support other domains of ageing care to achieve greater effectiveness in geriatric care.
Keywords: age-friendly, nursing service system, excellence model, geriatric care
Procedia PDF Downloads 345
765 Developing Customizable Scaffolds With Antimicrobial Properties for Vascular Tissue Regeneration Using Low Temperature Plasma
Authors: Komal Vig, Syamala Soumyakrishnan, Yadav Baral
Abstract:
Bypass surgery using an autologous vein has been one of the most effective treatments for cardiovascular disease (CVD). More recently, tissue engineering, including engineered vascular grafts to synthesize blood vessels, has been gaining usage. Dacron and ePTFE have been employed for vascular grafts; however, these do not work well for small-diameter grafts (<6 mm) due to intimal hyperplasia and thrombosis. In the present study, PTFE was treated with low temperature plasma (LTP) to improve the endothelialization of the intimal surface of the graft. Scaffolds were also modified with polyvinylpyrrolidone-coated silver nanoparticles (Ag-PVP) and the antimicrobial peptides p753 and p359. Human umbilical vein endothelial cells (HUVEC) were plated on the developed scaffolds, and cell proliferation was determined by the MTT assay. Cell attachment on the scaffolds was visualized by microscopy. mRNA expression levels of different cell markers were investigated using quantitative real-time PCR (qPCR). X-ray photoelectron spectroscopy confirmed the introduction of oxygenated functionalities by the LTP air plasma. Microscopy and MTT assays indicated an increase in cell viability on the LTP-treated scaffolds. Gene expression studies showed enhanced expression of the cell adhesion marker integrin-α5 gene after LTP treatment. The KB (Kirby-Bauer) test displayed zones of inhibition for Ag-PVP, p753, and p359 of 19 mm, 14 mm, and 12 mm, respectively. To determine the toxicity of the antimicrobial agents to cells, the MTT assay was performed using HEK293 cells; it showed that Ag-PVP and the peptides were non-toxic to cells at 100 μg/mL and 50 μg/mL, respectively. Live/dead analysis and plate counts of treated bacteria showed bacterial inhibition on the developed scaffold compared to the non-treated scaffold. SEM was performed to analyze the structural changes of bacteria after treatment with the antimicrobial agents, and gene expression studies were conducted on RNA from bacteria treated with Ag-PVP and the peptides using qRT-PCR.
Based on these initial results, more scaffold alternatives will be developed and investigated in cell growth and vascularization studies.
Keywords: low temperature plasma, vascular graft, HUVEC cells, antimicrobial
Procedia PDF Downloads 245
764 A One-Dimensional Model for Contraction in Burn Wounds: A Sensitivity Analysis and a Feasibility Study
Authors: Ginger Egberts, Fred Vermolen, Paul van Zuijlen
Abstract:
One of the common complications in post-burn scars is contraction. Depending on the extent of contraction and the wound dimensions, the contracture can cause a limited range of motion of joints. A one-dimensional morphoelastic continuum hypothesis-based model describing post-burn scar contraction is considered. The beauty of the one-dimensional model is its speed; it quickly yields new results and, therefore, insight. The model describes the movement of the skin and the development of the strain present. Besides these mechanical components, the model also contains chemical components that play a major role in the wound healing process: fibroblasts, myofibroblasts, the so-called signaling molecules, and collagen. The dermal layer is modeled as an isotropic morphoelastic solid, and pulling forces are generated by the myofibroblasts. The solution to the model equations is approximated by the finite-element method using linear basis functions. One of the major challenges in biomechanical modeling is the estimation of parameter values; therefore, this study provides a comprehensive description of skin mechanical parameter values and a sensitivity analysis. Further, since skin mechanical properties change with aging, it is important that the model be feasible for predicting the development of contraction in burn patients of different ages, and hence this study also provides a feasibility study. The variability in the solutions is generated by varying the values of some parameters simultaneously over the domain of computation, guided by the results of the sensitivity analysis. The sensitivity analysis shows that the most sensitive parameters are the equilibrium concentration of collagen, the apoptosis rate of fibroblasts and myofibroblasts, and the secretion rate of signaling molecules.
This suggests that most of the variability in the evolution of contraction in burns in patients of different ages might be caused mostly by the decreasing equilibrium collagen concentration. As expected, the feasibility study shows that this model can be used to reproduce distinct extents of contraction in burns in patients of different ages. Nevertheless, contraction formation in children differs from contraction formation in adults because of growth. This factor has not yet been incorporated in the model, and therefore the feasibility results for children differ from what is seen in the clinic.
Keywords: biomechanics, burns, feasibility, fibroblasts, morphoelasticity, sensitivity analysis, skin mechanics, wound contraction
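The parameter-sensitivity ranking reported above can be illustrated with a one-at-a-time (elasticity-style) sensitivity index. The surrogate function below is a made-up stand-in with an arbitrary functional form, not the morphoelastic model itself; only the routine `oat_sensitivity` reflects the general technique.

```python
def contraction_surrogate(params):
    # Toy surrogate (NOT the morphoelastic model): a "relative wound
    # area" that decreases with collagen equilibrium and apoptosis rate
    # and increases with signaling secretion. Purely illustrative.
    c_eq = params["collagen_eq"]
    apoptosis = params["apoptosis_rate"]
    secretion = params["secretion_rate"]
    return 1.0 / (1.0 + 0.8 * c_eq + 0.3 * apoptosis) + 0.1 * secretion

def oat_sensitivity(model, baseline, rel_step=0.1):
    # One-at-a-time sensitivity: relative change in output per relative
    # change in each parameter, holding the others at baseline.
    base_out = model(baseline)
    indices = {}
    for name, value in baseline.items():
        perturbed = dict(baseline)
        perturbed[name] = value * (1.0 + rel_step)
        indices[name] = ((model(perturbed) - base_out) / base_out) / rel_step
    return indices

baseline = {"collagen_eq": 1.0, "apoptosis_rate": 0.5, "secretion_rate": 0.2}
for name, s in sorted(oat_sensitivity(contraction_surrogate, baseline).items(),
                      key=lambda kv: -abs(kv[1])):
    print(f"{name}: {s:+.3f}")
```

Ranking parameters by the magnitude of such indices is what singles out the "most sensitive" parameters; the study's actual analysis operates on the full finite-element solution rather than a closed-form surrogate.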
Procedia PDF Downloads 160
763 Carbon Nanotube Field Effect Transistor - A Review
Authors: P. Geetha, R. S. D. Wahida Banu
Abstract:
The crowning advances in silicon-based electronic technology have dominated the computation world for the past decades. The captivating performance of Si devices lies in the sustainable scaling down of their physical dimensions, thereby increasing device density and improving performance. However, fundamental limitations due to physical, technological, economic, and manufacturing features restrict further miniaturization of Si-based devices. The pitfalls of scaling down the devices include process variation, short-channel effects, high leakage currents, and reliability concerns. To fix these problems, one must either follow a new concept that manages the current hitches or support the available concept with different materials. The new concepts are spintronics, quantum computation, and two-terminal molecular devices. Otherwise, the presently used, well-known three-terminal devices can be modified with different materials suited to addressing the scaling-down difficulties. The first approach lies in the far future, since it needs considerable effort; the second path is a bright light along the journey. Modelling paves the way to knowing not only the current-voltage characteristics but also the performance of new devices. So, it is desirable to model a new device with suitable gate control and to project its ability to handle high current, high power, and high frequency with short delay and high velocity, along with excellent electronic and optical properties. Carbon nanotubes have become a thriving material to replace silicon in nano devices. A well-planned, optimized utilization of the carbon material leads to many more advantages. The unique nature of this organic material enables recent developments in almost all fields of application, from the automobile industry to medical science, and especially in the electronics field, on which the automation industry depends. Much research is being done in this area.
This paper reviews the carbon nanotube field effect transistor with various gate configurations, numbers of channel elements, CNT wall configurations, and different modelling techniques.
Keywords: array of channels, carbon nanotube field effect transistor, double gate transistor, gate wrap around transistor, modelling, multi-walled CNT, single-walled CNT
Procedia PDF Downloads 327
762 Modelling Forest Fire Risk in the Goaso Forest Area of Ghana: Remote Sensing and Geographic Information Systems Approach
Authors: Bernard Kumi-Boateng, Issaka Yakubu
Abstract:
Forest fire, an uncontrolled fire occurring in nature, has become a major concern for the Forestry Commission of Ghana (FCG). Forest fires in Ghana usually result in massive destruction and take a long time for firefighting crews to bring under control. In order to assess the effect of forest fire at the local scale, it is important to consider the role fire plays in vegetation composition, biodiversity, soil erosion, and the hydrological cycle. The occurrence, frequency, and behaviour of forest fires vary over time and space, primarily as a result of the complicated influences of changes in land use, vegetation composition, fire suppression efforts, and other indigenous factors. One of the forest zones in Ghana with a high level of vegetation stress is the Goaso forest area. The area has experienced changes in its traditional land use, such as hunting, charcoal production, inefficient logging practices, and rural abandonment patterns. These factors, which were identified as major causes of forest fire, have recently modified the incidence of fire in the Goaso area. Despite the incidence of forest fires in the Goaso forest area, most of the forest services do not provide a cartographic representation of the burned areas. As a result, the firefighting unit of the FCG requires a significant amount of information to understand fire risk factors and their spatial effects. This study uses Remote Sensing and Geographic Information System techniques to develop a fire risk hazard model using the Goaso Forest Area (GFA) as a case study. From the results of the study, natural forest, agricultural lands, and plantation cover types were identified as the major fuel-contributing loads, while water bodies, roads, and settlements were identified as minor fuel-contributing loads.
Based on the major and minor fuel-contributing loads, a forest fire risk hazard model with reasonable accuracy has been developed for the GFA to assist decision making.
Keywords: forest, GIS, remote sensing, Goaso
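The fuel-load classification described above lends itself to a simple weighted raster overlay, a common GIS operation. The class weights below are illustrative placeholders, not the study's calibrated values.

```python
# Hypothetical land-cover-to-fuel-load weights: major fuel contributors
# (natural forest, agriculture, plantation) get high weights, minor ones
# (settlement, road, water) get low weights. Values are illustrative.
FUEL_WEIGHTS = {
    "natural_forest": 0.9,
    "agriculture": 0.7,
    "plantation": 0.6,
    "settlement": 0.2,
    "road": 0.1,
    "water": 0.0,
}

def risk_map(land_cover):
    # land_cover: 2-D grid of class names -> 2-D grid of risk scores in [0, 1].
    return [[FUEL_WEIGHTS[cell] for cell in row] for row in land_cover]

# A tiny 2x3 "raster" of classified cells:
grid = [
    ["natural_forest", "natural_forest", "agriculture"],
    ["plantation", "water", "settlement"],
]
for row in risk_map(grid):
    print(row)
```

A full model would combine several such weighted layers (fuel, slope, proximity to roads and settlements) cell by cell before classifying the result into risk zones.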
Procedia PDF Downloads 458
761 Human Rabies Survivors in India: Epidemiological, Immunological and Virological Studies
Authors: Madhusudana S. N., Reeta Mani, Ashwini S., Satishchandra P., Netravati, Udhani V., Fiaz A., Karande S.
Abstract:
Rabies is an acute encephalitis that is considered 100% fatal despite occasional reports of survivors. In recent times, however, more cases of human rabies survivors have been reported; in the last 5 years, there have been six laboratory-confirmed human rabies survivors in India alone. All cases were children below 15 years, and all contracted the disease through dog bites. All of them had received a full or partial course of rabies vaccination, and 4 of the 6 had also received rabies immunoglobulin. The cases were treated in intensive care units in hospitals in Bangalore, Mumbai, Chandigarh, Lucknow, and Goa. We report here the results of the immunological and virological studies conducted at our laboratory on these patients. The clinical samples obtained from these patients were serum, CSF, nuchal skin biopsy, and saliva. Serum and CSF samples were subjected to the standard RFFIT for estimation of rabies-neutralizing antibodies. Skin biopsy, CSF, and saliva were processed by TaqMan real-time PCR for detection of viral RNA. CSF, saliva, and skin homogenates were also processed for virus isolation by inoculation of suckling mice. The PBMCs isolated from fresh blood were subjected to an ELISPOT assay to determine the type of immune response (Th1/Th2). Both CSF and serum were also investigated for selected cytokines by Luminex assay. The levels of antibodies to the viral G protein and N protein were determined by ELISA. All survivors had very high titers of RVNA in serum and CSF, 100-fold higher than in non-survivors and vaccine controls. A five-fold rise in titer could be demonstrated in 4 of the 6 patients. All survivors had a significant increase in antibodies to the G protein in both CSF and serum when compared to non-survivors. There was a profound and robust Th1 response in all survivors, indicating that interferon gamma could be an important factor in virus clearance. We could isolate viral RNA in only one patient, four years after he had developed symptoms.
The partial N gene sequencing revealed 99% homology to the species I strain prevalent in India. Levels of selected cytokines in CSF and serum did not reveal any difference between survivors and non-survivors. To conclude, survival from rabies is mediated by virus-specific immune responses of the host, and clearance of rabies virus from the CNS may involve the participation of both Th2 and Th1 immune responses.
Keywords: rabies, rabies treatment, rabies survivors, immune response in rabies encephalitis
Procedia PDF Downloads 330
760 Investigating Salience Theory’s Implications for Real-Life Decision Making: An Experimental Test for Whether the Allais Paradox Exists under Subjective Uncertainty
Authors: Christoph Ostermair
Abstract:
We deal with the effect of correlation between prospects on human decision making under uncertainty as proposed by the comparatively new and promising model of “salience theory of choice under risk”. In this regard, we show that the theory entails the prediction that the inconsistency of choices, known as the Allais paradox, should not be an issue in the context of “real-life decision making”, which typically corresponds to situations of subjective uncertainty. The Allais paradox, probably the best-known anomaly regarding expected utility theory, would then essentially have no practical relevance. If, however, empiricism contradicts this prediction, salience theory might suffer a serious setback. Explanations of the model for variable human choice behavior are mostly the result of a particular mechanism that does not come into play under perfect correlation. Hence, if it turns out that correlation between prospects – as typically found in real-world applications – does not influence human decision making in the expected way, this might to a large extent cost the theory its explanatory power. The empirical literature regarding the Allais paradox under subjective uncertainty is so far rather sparse. Beyond that, the results are hard to maintain as an argument, as the presentation formats commonly employed have presumably generated so-called event-splitting effects, thereby distorting subjects’ choice behavior. In our own incentivized experimental study, we control for such effects by means of two different choice settings. We find significant event-splitting effects in both settings, thereby supporting the suspicion that the existing empirical results related to Allais paradoxes under subjective uncertainty may not be able to answer the question at hand.
Nevertheless, we find that the basic tendency behind the Allais paradox, which is a particular switch of the preference relation due to a modified common consequence shared by two prospects, is still existent both under an event-splitting and a coalesced presentation format. Yet, the modal choice pattern is in line with the prediction of salience theory. As a consequence, the effect of correlation, as proposed by the model, might - if anything - only weaken the systematic choice pattern behind the Allais paradox.
Keywords: Allais paradox, common consequence effect, models of decision making under risk and uncertainty, salience theory
Procedia PDF Downloads 201
759 Towards a Doughnut Economy: The Role of Institutional Failure
Authors: Ghada El-Husseiny, Dina Yousri, Christian Richter
Abstract:
Social services are often characterized by market failures, which justifies government intervention in the provision of these services. It is widely acknowledged that government intervention breeds corruption, since resources are being transferred from one party to another. However, what is still being extensively studied is the magnitude of the negative impact of corruption on publicly provided services and development outcomes. Corruption has the power to hinder development and cripple our march towards the Sustainable Development Goals. Corruption diminishes the efficiency and effectiveness of public health and education spending and directly impacts the outcomes of these sectors. This paper empirically examines the impact of institutional failure on public sector service provision, with the sole purpose of studying the impact of corruption on SDG 3 and SDG 4: good health and wellbeing, and quality education, respectively. The paper explores the effect of corruption on these goals from various perspectives and extends the analysis by examining whether the impact of corruption on these goals differed when it accounted for the current corruption state. The analysis uses pooled OLS (ordinary least squares) and fixed-effects panel estimation on 22 corrupt and 22 clean countries between 2000 and 2017. Results show that corruption in both corrupt and clean countries has a more severe impact on the health than the education sector. In almost all specifications, corruption has an insignificant effect on school enrollment rates but a significant effect on infant mortality rates. Results further indicate that, on average, a 1-point increase in the CPI (Corruption Perceptions Index) can increase health expenditures by 0.116% in corrupt and clean countries. However, the fixed effects model indicates that the way health and education expenditures are determined in clean and corrupt countries is completely country-specific, in which corruption plays a minimal role.
Moreover, the findings show that school enrollment rates and infant mortality rates depend, to a large extent, on public spending. The most striking result is that corrupt countries, on average, have more effective and efficient healthcare expenditures. While some insights are provided as to why these results prevail, they should be researched further. All in all, corruption impedes development outcomes, and any anti-corruption policies adopted will bring forth immense improvements and speed up the march towards sustainability.
Keywords: corruption, education, health, public spending, sustainable development
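The contrast between the pooled OLS and fixed-effects estimates in this abstract can be illustrated with a minimal sketch on made-up panel data (not the study's 22-plus-22 country sample): when an unobserved country effect correlates with the regressor, pooled OLS is biased, while the within (fixed-effects) estimator, which demeans each country's series, recovers the true coefficient.

```python
import numpy as np

# Simulated panel: 8 "countries" observed over 18 "years" (all numbers invented).
rng = np.random.default_rng(0)
n_countries, n_years = 8, 18
country_effect = rng.normal(0, 2, n_countries)            # unobserved heterogeneity
x = rng.normal(0, 1, (n_countries, n_years)) + country_effect[:, None]
y = 0.5 * x + country_effect[:, None] + rng.normal(0, 0.1, x.shape)

# Pooled OLS: one regression on the stacked data, ignoring country effects.
X, Y = x.ravel(), y.ravel()
beta_pooled = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)

# Fixed effects ("within" estimator): subtract each country's mean first.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_fe = (xd * yd).sum() / (xd ** 2).sum()
```

Here beta_fe lands near the true 0.5 while beta_pooled is pulled upward by the country effect, which is the kind of divergence the abstract's fixed-effects finding points to.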
Procedia PDF Downloads 171
758 Islamic Finance and Trade Promotion in the African Continental Free Trade Area: An Exploratory Study
Authors: Shehu Usman Rano Aliyu
Abstract:
Despite the significance of finance as a major trade lubricant, evidence in the literature points to its scarcity and increasing cost, especially in developing countries, where small and medium-scale enterprises are worst affected. The creation of the African Continental Free Trade Area (AfCFTA) in 2018, an organ of the African Union (AU), was meant to serve as a beacon for deepening economic integration through the removal of trade barriers inhibiting intra-African trade and movement of persons, among others. Hence, this research explores the role Islamic trade finance (ITF) could play in spurring intra- and inter-African trade. The study involves six countries: Egypt, Kenya, Malaysia, Morocco, Nigeria, and Saudi Arabia, and employs survey research, a total of 430 sample data, and SmartPLS structural equation modelling (SEM) techniques in its analyses. We find strong evidence that the Shari’ah, legal, and regulatory compliance practices of the ITF institutions align with internal, national, and international compliance requirements, as do the unique instruments applied in ITF. In addition, ITF was found to be largely driven by global economic and political stability, socially responsible finance, ethical and moral considerations, risk-sharing, and the resilience of the global Islamic finance industry. Further, SMEs, governments, and importers are the major beneficiary sectors. By and large, AfCFTA’s protocols align with the principles of ITF and are therefore suited for the proliferation of Islamic finance in the continent. And while AML/KYC and BASEL requirements, compliance with AAOIFI and IFSB standards, the paucity of Shari'ah experts, threats to global security, and increasing global economic uncertainty pose major impediments, the future of ITF would be shaped by a greater need for institutional and policy support, global economic and political stability, a robust regulatory framework, and digital technology/fintech.
The study calls for the licensing of more ITF institutions in the continent, the participation of multilateral institutions in ITF, and the harmonization of Shari'ah standards.
Keywords: AfCFTA, Islamic trade finance, murabaha, letter of credit, forwarding
Procedia PDF Downloads 57
757 Short Life Cycle Time Series Forecasting
Authors: Shalaka Kadam, Dinesh Apte, Sagar Mainkar
Abstract:
The life cycle of products is becoming shorter and shorter due to increased competition in the market, shorter product development times and increased product diversity. Short life cycles are normal in the retail industry, the style business, entertainment media, and the telecom and semiconductor industries. The subject of accurate demand forecasting for short-lifecycle products is of special interest to many researchers and organizations. Due to the short life cycle of products, the amount of historical data that is available for forecasting is very minimal or even absent when new or modified products are launched in the market. The companies dealing with such products want to increase the accuracy of demand forecasting so that they can utilize the full potential of the market while not oversupplying. This poses the challenge of developing a forecasting model that can forecast accurately while handling large variations in data and considering the complex relationships between various parameters of the data. Many statistical models have been proposed in the literature for forecasting time series data. Traditional time series forecasting models do not work well for short life cycles due to the lack of historical data. Also, artificial neural network (ANN) models are very time-consuming for forecasting. We have studied the existing models that are used for forecasting and their limitations. This work proposes an effective and powerful approach for short life cycle time series forecasting. We have proposed an approach which takes into consideration different scenarios related to data availability for short-lifecycle products. We then suggest a methodology which combines statistical analysis with structured judgement. The defined approach can also be applied across domains. We then describe the method of creating a profile from analogous products. This profile can then be used for forecasting products using the historical data of analogous products.
We have designed an application which combines data, analytics and domain knowledge using point-and-click technology. The forecasting results generated are compared using MAPE, MSE and RMSE error scores. Conclusion: based on the results, it is observed that no single approach is sufficient for short life-cycle forecasting, and two or more approaches need to be combined to achieve the desired accuracy.
Keywords: forecast, short life cycle product, structured judgement, time series
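The three error scores named above are standard forecast-accuracy metrics; a minimal sketch of how each is computed, using invented demand numbers rather than the study's data:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def mse(actual, forecast):
    """Mean squared error."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean((a - f) ** 2))

def rmse(actual, forecast):
    """Root mean squared error, in the units of the data."""
    return float(np.sqrt(mse(actual, forecast)))

# Hypothetical short-life-cycle demand and a candidate forecast.
sales = [120, 100, 80, 50]
fcst = [110, 105, 70, 55]
```

MAPE is scale-free (useful across products of different volumes), while MSE/RMSE penalize large misses more heavily; comparing all three, as the abstract does, guards against an approach that looks good on only one of them.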
Procedia PDF Downloads 360
756 Suitability of Wood Sawdust Waste Reinforced Polymer Composite for Fireproof Doors
Authors: Timine Suoware, Sylvester Edelugo, Charles Amgbari
Abstract:
The susceptibility of natural fibre polymer composites to flame has necessitated research to improve and develop flame retardants (FR) to delay the escape of combustible volatiles. Previous approaches relied mostly on FRs such as aluminium tri-hydroxide (ATH) and ammonium polyphosphate (APP) to improve the fire performance of wood sawdust polymer composites (WSPC), with emphasis on non-structural building applications. In this paper, APP was modified with gum Arabic powder (GAP) and then hybridized with ATH at 0, 12 and 18% loading ratios to form new FR species: WSPC12%APP-GAP and WSPC18%ATH/APP-GAP. The FR species were incorporated in wood sawdust waste reinforced in polyester resin to form panels for fireproof doors. The panels were produced using a hand lay-up compression moulding technique and cured at room temperature. Specimens cut from the panels were then tested for tensile strength (TS), flexural strength (FS) and impact strength (IS) using a universal testing machine and an impact tester; thermal stability was measured using a TGA/DSC 1 (Mettler Toledo); time-to-ignition (Tig), heat release rate (HRR), peak HRR (HRRp), average HRR (HRRavg), total HRR (THR), peak mass loss rate (MLRp), average smoke production rate (SPRavg) and carbon monoxide production (COP) were obtained using the cone calorimeter apparatus. From the mechanical properties obtained, improvements in IS for the panels were not noticeable, whereas TS and FS for WSPC12%APP-GAP stood at 12.44 MPa and 85.58 MPa respectively, more than those of the panels without FR (WSPC0%). For WSPC18%ATH/APP-GAP, TS and FS stood at 16.45 MPa and 50.49 MPa respectively, again more than WSPC0%. From the thermal analysis, the panels did not exhibit any significant change, as early degradation was observed. At 900 °C, the char residues improved to 15% for WSPC12%APP-GAP and 19% for WSPC18%ATH/APP-GAP, compared with 5% for WSPC0%, confirming APP-GAP to be a good FR.
At 50 kW/m² heat flux (HF), WSPC12%APP-GAP best improved the fire behaviour of the panels when compared to WSPC0%, as follows: Tig = 46 s, HRRp = 56.1 kW/m², HRRavg = 32.8 kW/m², THR = 66.6 MJ/m², MLRp = 0.103 g/s, TSR = 0.04 m²/s and COP = 0.051 kg/kg, all improvements over the corresponding values for WSPC0%. It can be concluded that the new concept of modifying FR with GAP in WSPC could meet the requirements of a fireproof door for building applications.
Keywords: composite, flame retardant, wood sawdust, fireproof doors
Procedia PDF Downloads 107
755 Development of Market Penetration for High Energy Efficiency Technologies in Alberta’s Residential Sector
Authors: Saeidreza Radpour, Md. Alam Mondal, Amit Kumar
Abstract:
Market penetration of high energy efficiency technologies has key impacts on energy consumption and GHG mitigation. It is also useful for managing the policies formulated by public or private organizations to achieve energy or environmental targets. Energy intensity in the residential sector of Alberta was 148.8 GJ per household in 2012, which is 39% more than the Canadian average of 106.6 GJ and the highest per-household energy consumption among the provinces. Appliance energy intensity in Alberta was 15.3 GJ per household in 2012, 14% higher than the average appliance energy demand intensity of the other provinces and territories in Canada. In this research, a framework has been developed to analyze the market penetration and market share of high energy efficiency technologies in the residential sector. The overall methodology was based on the development of data-intensive models to estimate the market penetration of appliances in the residential sector over a time period. The developed models were functions of a number of macroeconomic and technical parameters. The mathematical equations were developed based on twenty-two years of historical data (1990-2011). The models were analyzed through a series of statistical tests. The market shares of high efficiency appliances were estimated based on related variables such as capital and operating costs, discount rate, appliance lifetime, annual interest rate, incentives and maximum achievable efficiency over the period 2015 to 2050. Results show that the market penetration of refrigerators is higher than that of other appliances. The stock of refrigerators per household is anticipated to increase from 1.28 in 2012 to 1.314 and 1.328 in 2030 and 2050, respectively. Modelling results show that the market penetration rate of stand-alone freezers will decrease between 2012 and 2050.
Freezer stock per household will decline from 0.634 in 2012 to 0.556 and 0.515 in 2030 and 2050, respectively. The stock of dishwashers per household is expected to increase from 0.761 in 2012 to 0.865 and 0.960 in 2030 and 2050, respectively. The increase in the market penetration rate of clothes washers and clothes dryers is nearly parallel. The stock of clothes washers and clothes dryers per household is expected to rise from 0.893 and 0.979 in 2012 to 0.960 and 1.0 in 2050, respectively. The presentation will include a detailed discussion of the modelling methodology and results.
Keywords: appliances efficiency improvement, energy star, market penetration, residential sector
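The abstract lists capital and operating costs, discount rate and appliance lifetime as market-share inputs. One common way such inputs enter this kind of model, though not necessarily the authors' exact formulation, is through an annualized life-cycle cost built from a capital recovery factor (CRF); a sketch with invented appliance numbers:

```python
def annualized_cost(capital, operating, rate, lifetime_years):
    """Capital cost spread over the lifetime at the given discount rate,
    plus the annual operating cost. CRF = r(1+r)^n / ((1+r)^n - 1)."""
    growth = (1 + rate) ** lifetime_years
    crf = rate * growth / (growth - 1)
    return capital * crf + operating

# Hypothetical standard vs. high-efficiency appliance (all values invented):
# the efficient unit costs more upfront but halves the annual operating cost.
standard = annualized_cost(capital=800, operating=90, rate=0.05, lifetime_years=15)
efficient = annualized_cost(capital=1100, operating=45, rate=0.05, lifetime_years=15)
```

In share models of this family, the appliance with the lower annualized cost captures the larger share, with the split softened by a choice function; here the efficient unit wins despite its higher capital cost.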
Procedia PDF Downloads 288
754 Time Estimation of Return to Sports Based on Classification of Health Levels of Anterior Cruciate Ligament Using a Convolutional Neural Network after Reconstruction Surgery
Authors: Zeinab Jafari A., Ali Sharifnezhad B., Mohammad Razi C., Mohammad Haghpanahi D., Arash Maghsoudi
Abstract:
Background and Objective: Sports-related rupture of the anterior cruciate ligament (ACL) and the injuries that follow have been associated with various disorders, such as long-lasting changes in muscle activation patterns in athletes, which might persist after ACL reconstruction (ACLR). The rupture of the ACL might result in abnormal patterns of movement execution, extending the treatment period and delaying athletes’ return to sports (RTS). As ACL injury is especially prevalent among athletes, the lengthy treatment process and athletes’ absence from sports are of great concern to athletes and coaches. Thus, estimating the safe time of RTS is of crucial importance. Therefore, using a deep neural network (DNN) to classify the health levels of the ACL in injured athletes, this study aimed to estimate the safe time for athletes to return to competitions. Methods: Ten athletes with ACLR and fourteen healthy controls participated in this study. Three health levels of the ACL were defined: healthy, six months post-ACLR surgery and nine months post-ACLR surgery. Athletes with ACLR were tested six and nine months after the ACLR surgery. During the course of this study, surface electromyography (sEMG) signals were recorded from five knee muscles, namely the Rectus Femoris (RF), Vastus Lateralis (VL), Vastus Medialis (VM), Biceps Femoris (BF) and Semitendinosus (ST), during single-leg drop landing (SLDL) and single-leg forward hopping (SLFH) tasks. The Pseudo-Wigner-Ville distribution (PWVD) was used to produce three-dimensional (3-D) images of the energy distribution patterns of the sEMG signals. Then, these 3-D images were converted to two-dimensional (2-D) images by implementing the heat mapping technique, and these were then fed to a deep convolutional neural network (DCNN). Results: In this study, we estimated the safe time of RTS by designing a DCNN classifier with an accuracy of 90%, which could classify the ACL into three health levels.
Discussion: The findings of this study demonstrate the potential of the DCNN classification technique using sEMG signals in estimating RTS time, which will assist in evaluating the recovery process of ACLR in athletes.
Keywords: anterior cruciate ligament reconstruction, return to sports, surface electromyography, deep convolutional neural network
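The heat-mapping step that turns a time-frequency energy surface into a 2-D image a CNN can consume can be sketched as a min-max rescaling to 8-bit grayscale. The matrix below is random stand-in data for a PWVD of an sEMG signal, and the exact mapping the authors used may differ:

```python
import numpy as np

def to_heatmap_image(energy):
    """Min-max normalize a 2-D energy map to uint8 pixel values 0-255."""
    e = np.asarray(energy, float)
    scaled = (e - e.min()) / (e.max() - e.min())
    return np.round(scaled * 255).astype(np.uint8)

# Stand-in for a 64x64 time-frequency energy distribution (invented values).
rng = np.random.default_rng(1)
energy = rng.random((64, 64)) * 5.0

img = to_heatmap_image(energy)
```

Fixing the pixel range this way removes amplitude differences between recordings, so the classifier sees the *pattern* of energy over time and frequency rather than its absolute scale.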
Procedia PDF Downloads 79
753 Adsorption: A Decision Maker in the Photocatalytic Degradation of Phenol on Co-Catalysts Doped TiO₂
Authors: Dileep Maarisetty, Janaki Komandur, Saroj S. Baral
Abstract:
In the current work, photocatalytic degradation of phenol was carried out both under UV and visible light to find the slowest step that limits the rate of the photo-degradation process. Characterizations such as XRD, SEM, FT-IR, TEM, XPS, UV-DRS, PL, BET, UPS, ESR and zeta potential experiments were conducted to assess the credibility of the catalysts in boosting photocatalytic activity. To explore the synergy, TiO₂ was doped with graphene and alumina. The orbital hybridization with alumina doping (mediated by graphene) resulted in higher electron transfer from the conduction band of TiO₂ to the alumina surface, where oxygen reduction reactions (ORR) occur. Besides, the doping of alumina and graphene introduced defects into the Ti lattice and helped improve the adsorptive properties of the modified photo-catalyst. Results showed that these defects promoted oxygen reduction reactions (ORR) on the catalyst’s surface. ORR activity aims at producing reactive oxygen species (ROS). These ROS oxidize the phenol molecules adsorbed on the surface of the photo-catalysts, thereby driving the photocatalytic reactions. Since mass transfer is considered the rate-limiting step, various mathematical models were applied to the experimental data to probe the best fit. By varying the parameters, it was found that intra-particle diffusion was the slowest step in the degradation process. The Lagergren model gave the best R² values, indicating the nature of the rate kinetics. Similarly, different adsorption isotherms were employed, and it was found that the Langmuir isotherm fits best, with a tremendous increase in the uptake capacity (mg/g) of TiO₂-rGO-Al₂O₃ compared with undoped TiO₂. This further assisted in higher adsorption of phenol molecules.
From the experimental results, kinetic modelling, and adsorption isotherms, it is concluded that apart from the changes in surface, optoelectronic and morphological properties that enhanced the photocatalytic activity, intra-particle diffusion within the catalyst’s pores serves as the rate-limiting step deciding the fate of the photo-catalytic degradation of phenol.
Keywords: ORR, phenol degradation, photo-catalyst, rate kinetics
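The Langmuir fit mentioned above can be sketched with synthetic data: the isotherm q_e = q_max·K·C_e/(1 + K·C_e) is linearized as C_e/q_e = C_e/q_max + 1/(K·q_max) and fitted by least squares. The constants and concentrations here are invented, not the study's values:

```python
import numpy as np

# Hypothetical "true" Langmuir parameters used to generate synthetic data.
q_max_true, K_true = 40.0, 0.8                # uptake capacity (mg/g), constant (L/mg)
Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0])     # equilibrium concentrations (mg/L)
qe = q_max_true * K_true * Ce / (1 + K_true * Ce)

# Linear Langmuir plot: Ce/qe vs Ce has slope 1/q_max and intercept 1/(K*q_max).
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1 / slope
K_fit = slope / intercept
```

On real data the scatter of the linearized points (and the R² of this line, versus the Lagergren kinetic fit) is what identifies Langmuir as the better-fitting isotherm.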
Procedia PDF Downloads 144
752 Modeling Acceptability of a Personalized and Contextualized Radio Embedded in Vehicles
Authors: Ludivine Gueho, Sylvain Fleury, Eric Jamet
Abstract:
Driver distraction is known to be a major contributing factor in car accidents. For many years, car manufacturers have been designing embedded technologies to address this problem and reduce distraction. Being able to predict user acceptance would further be helpful in the development process to build appropriate systems. The present research aims at modelling the acceptability of a specific system, an innovative personalized and contextualized embedded radio, through an online survey of 202 people in France that assessed the psychological variables determining intentions to use the system. The questionnaire instantiated the dimensions of the extended version of the UTAUT acceptability model. Because of the specific features of the system assessed, we added four dimensions: perceived security, anxiety, trust and privacy concerns. Results showed that hedonic motivation, i.e., the fun or pleasure derived from using a technology, and performance expectancy, i.e., the degree to which individuals believe that the characteristics of the system meet their needs, are the most important dimensions in determining behavioral intentions about the innovative radio. To a lesser extent, social influence, i.e., the degree to which individuals think they can use the system while respecting their social group’s norms and while giving a positive image of themselves, had an effect on behavioral intentions. Moreover, trust, that is, the positive belief about the perceived reliability of, dependability of, and confidence in a person, object or process, had a significant effect, mediated by performance expectancy. In applied terms, the present research reveals that, to be accepted, an in-car embedded new technology has to address individual needs, for instance by facilitating the driving activity or by providing useful information. If it shows hedonic qualities by being entertaining, attractive or comfortable, this may improve intentions to use it.
Therefore, it is clearly important to include reflection about user experience in the design process. Finally, users have to be reassured about the system’s reliability. For example, improving the transparency of the system by providing information about how it functions could improve trust. These results shed light on the determinants of acceptance of an in-vehicle technology and are useful for manufacturers designing acceptable systems.
Keywords: acceptability, innovative embedded radio, structural equation, user-centric evaluation, UTAUT
Procedia PDF Downloads 272
751 Infestation in Omani Date Palm Orchards by Dubas Bug Is Related to Tree Density
Authors: Lalit Kumar, Rashid Al Shidi
Abstract:
Phoenix dactylifera (date palm) is a major crop in many Middle Eastern countries, including Oman. The Dubas bug Ommatissus lybicus is the main pest that affects date palm crops. However, not all plantations are infested, and it is still uncertain why some plantations become infested while others do not. This research investigated whether tree density and the system of planting (random versus systematic) had any relationship with infestation and levels of infestation. Remote sensing and geographic information systems were used to determine the density of trees (number of trees per unit area), while infestation levels were determined by manual counting of insects on 40 leaflets from two fronds on each tree, with a total of 20-60 trees in each village. The infestation was recorded as the average number of insects per leaflet. For tree density estimation, WorldView-3 scenes, with eight bands and 2 m spatial resolution, were used. The local maxima method, which depends on locating the pixel of highest brightness inside a certain exploration window, was used to identify the trees in the image and delineate individual trees. This information was then used to determine whether the plantation was random or systematic. Ordinary least squares (OLS) regression was used to test the global correlation between tree density and infestation level, and geographically weighted regression (GWR) was used to find the local spatial relationship. The accuracy of detecting trees varied from 83–99% in agricultural lands with systematic planting patterns to 50–70% in natural forest areas. Results revealed that the density of the trees in most of the villages was higher than the recommended planting number (120–125 trees/hectare). For infestation correlations, the GWR model showed a good positive significant relationship between infestation and tree density in the spring season, with R² = 0.60, and a medium positive significant relationship in the autumn season, with R² = 0.30.
In contrast, the OLS model results showed a weaker positive significant relationship in the spring season, with R² = 0.02, p < 0.05, and an insignificant relationship in the autumn season, with R² = 0.01, p > 0.05. The results showed a positive correlation between infestation and tree density, which suggests that infestation severity increased as the density of date palm trees increased. The correlation results showed that density alone explained about 60% of the variation in infestation. This information can be used by the relevant authorities to better control infestations as well as to manage their pesticide spraying programs.
Keywords: dubas bug, date palm, tree density, infestation levels
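The global (OLS) side of the OLS-versus-GWR comparison reduces to a single regression of infestation on tree density and its R². A sketch on simulated village data follows; GWR would repeat the same fit at each location with spatial kernel weights, which is why it can report a much higher local R² than the single global regression:

```python
import numpy as np

# Simulated villages (all numbers invented): density in trees/hectare,
# infestation in insects per leaflet, with a positive underlying relationship.
rng = np.random.default_rng(2)
density = rng.uniform(100, 200, 40)
infestation = 0.05 * density + rng.normal(0, 1.5, 40)

# Global OLS fit and its coefficient of determination R².
slope, intercept = np.polyfit(density, infestation, 1)
pred = slope * density + intercept
ss_res = np.sum((infestation - pred) ** 2)
ss_tot = np.sum((infestation - infestation.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

A positive slope with a modest global R², as in the study's autumn result, is exactly the pattern that motivates refitting locally with GWR.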
Procedia PDF Downloads 193
750 High Altitude Glacier Surface Mapping in Dhauliganga Basin of Himalayan Environment Using Remote Sensing Technique
Authors: Aayushi Pandey, Manoj Kumar Pandey, Ashutosh Tiwari, Kireet Kumar
Abstract:
Glaciers play an important role in climate change and are sensitive indicators of global climate change. Glaciers in the Himalayas are unique as they are predominantly of the valley type and are located in tropical, high-altitude regions. These glaciers are often covered with debris, which greatly affects the ablation rate and works as a sensitive indicator of glacier health. The aim of this study is to map the high-altitude glacier surface, with a focus on glacial lake and debris estimation, using different techniques in the Nagling glacier of the Dhauliganga basin in the Himalayan region. Different image classification techniques, i.e., thresholding on different band ratios and supervised classification using the maximum likelihood classifier (MLC), were applied to high-resolution Sentinel-2A Level-1C satellite imagery of 14 October 2017. The near-infrared (NIR)/shortwave-infrared (SWIR) ratio image was used to extract the glaciated classes (snow, ice, ice-mixed debris (IMD)) from the non-glaciated terrain classes. The SWIR/blue ratio image was used to map valley rock and debris, while the green/NIR ratio image was found most suitable for mapping the glacial lake. Accuracy assessment was performed using high-resolution (3 m) PlanetScope imagery and 60 stratified random points. The overall accuracy of the MLC was 85%, while the accuracy of the band ratios was 96.66%. According to the band ratio technique, the total areal extent of the glaciated classes (snow, ice, IMD) in the Nagling glacier was 10.70 km², nearly 38.07% of the study area, comprising 30.87% snow-covered area, 3.93% ice and 3.27% IMD-covered area. Non-glaciated classes (vegetation, glacial lake, debris and valley rock) covered 61.93% of the total area, of which valley rock is dominant with 33.83% coverage, followed by debris covering 27.7% of the area in the Nagling glacier.
The glacial lake and debris were accurately mapped using the band ratio technique. Hence, the band ratio approach appears to be useful for the mapping of debris-covered glaciers in the Himalayan region.
Keywords: band ratio, Dhauliganga basin, glacier mapping, Himalayan region, maximum likelihood classifier (MLC), Sentinel-2 satellite image
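The band-ratio thresholding described above amounts to a per-pixel ratio and a cutoff; a minimal sketch with random stand-in reflectance values (the actual threshold in the study is derived from the imagery, not the illustrative 2.0 used here):

```python
import numpy as np

# Stand-in 4x4 reflectance tiles for the NIR and SWIR bands (invented values).
# Snow and ice are bright in NIR and dark in SWIR, so a high NIR/SWIR ratio
# flags glaciated pixels.
rng = np.random.default_rng(3)
nir = rng.uniform(0.1, 0.9, (4, 4))
swir = rng.uniform(0.05, 0.5, (4, 4))

ratio = nir / swir
glacier_mask = ratio > 2.0            # illustrative threshold, not the study's
glacier_fraction = glacier_mask.mean()  # areal share of glaciated pixels
```

Summing the mask (or its fraction) over the scene and multiplying by the pixel area is what yields areal extents like the 10.70 km² reported for the glaciated classes.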
Procedia PDF Downloads 230
749 Dynamic Capability: An Exploratory Study Applied to Social Enterprise in South East Asia
Authors: Atiwat Khatpibunchai, Taweesak Kritjaroen
Abstract:
A social enterprise is an innovative hybrid organization whose ultimate goal is to generate revenue and use it as a fund to solve social and environmental problems. Although the evidence shows the clear value of the economic, social and environmental aspects, most social enterprises are limited in expanding their social and environmental impact through the normal market mechanism. This is because the major sources of revenue of social enterprises derive from business advocates who merely wish to support society and the environment by using the products and services of social enterprises, rather than expecting satisfaction and a distinctive advantage from those products and services. Thus, social enterprises cannot reach the achievements that other businesses do. The relevant concepts from the literature review revealed that dynamic capability is the ability to sense, integrate and reconfigure internal resources and utilize external resources to adapt to changing environments, create innovation and achieve competitive advantage. The objective of this research is to study the influence of dynamic capability on competitive advantage and sustainable performance, as well as to determine the important elements of dynamic capability. The researchers developed a conceptual model from the related concepts and theories of dynamic capability; the conceptual model shows the influence of dynamic capability on the competitive advantage and sustainable performance of social enterprises. A total of 230 organizations in South-East Asia served as participants in this study. The results of the study were analyzed by structural equation modelling (SEM), which indicated that the research model is consistent with the empirical data. The results also demonstrated that dynamic capability has a direct and indirect influence on competitive advantage and sustainable performance.
Moreover, it can be summarized that dynamic capability consists of five elements: 1) the ability to sense an opportunity; 2) the ability to seize an opportunity; 3) the ability to integrate resources; 4) the ability to absorb resources; 5) the ability to create innovation. The study recommends that related sectors use this study as a guideline to support and promote social enterprises. The focus should be on the important elements of dynamic capability, namely the development of the ability to transform existing resources in the organization and the ability to seize opportunities from a changing market.
Keywords: dynamic capability, social enterprise, sustainable competitive advantage, sustainable performance
Procedia PDF Downloads 252
748 Optimal Pricing Based on Real Estate Demand Data
Authors: Vanessa Kummer, Maik Meusel
Abstract:
Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. Economic analyses often use only complete records. Usually, however, the proportion of complete records is rather small, so most of the information is neglected; moreover, the complete records may be strongly distorted, and the reason that data is missing might itself contain information, which that approach ignores. An interesting question is, therefore, whether for economic analyses such as the one at hand there is added value in using the whole data set with imputed missing values, compared to using the usually small share of complete records (the baseline), and how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones are applied: clustering, principal component analysis, and neural network techniques. By training the model iteratively on the imputed data, and thereby including the information of all records, the distortion of the first training set (the complete records) vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data.
After the optimal parameter set for each algorithm has been found, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates based on the imputed data sets do not differ significantly from each other; the demand estimate derived from the baseline data, however, does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Demand estimates derived from the whole data set are also much more accurate than the baseline estimate. Thus, in order to obtain optimal results, it is important to make use of all available data, even though this involves additional procedures such as data imputation.
Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning
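The evaluation procedure described in this abstract (randomly masking known values, re-estimating them, and comparing the estimates to the actual data) can be sketched as follows. This is a minimal illustration on synthetic data with two simple imputers, a column-mean baseline and an iterative low-rank (PCA-style) method; the study itself applies clustering, PCA, and neural network techniques to the Swiss search-subscription data, so all names and numbers below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "complete" data: 200 listings x 4 correlated features
# (stand-ins for e.g. rooms, area, region index, price; purely illustrative).
latent = rng.normal(size=(200, 2))
data = latent @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(200, 4))

# Randomly hide 20% of the entries, as in the evaluation procedure described.
mask = rng.random(data.shape) < 0.2
incomplete = np.where(mask, np.nan, data)

def impute_mean(x):
    """Baseline: replace missing entries with column means."""
    out = x.copy()
    col_means = np.nanmean(x, axis=0)
    idx = np.where(np.isnan(x))
    out[idx] = np.take(col_means, idx[1])
    return out

def impute_svd(x, rank=2, n_iter=50):
    """Iterative low-rank (PCA-style) imputation: alternate between a
    rank-k SVD reconstruction and re-filling the missing entries."""
    filled = impute_mean(x)
    missing = np.isnan(x)
    for _ in range(n_iter):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
        filled[missing] = approx[missing]
    return filled

def rmse(est):
    """Root-mean-square error on the masked (held-out) entries only."""
    return float(np.sqrt(np.mean((est[mask] - data[mask]) ** 2)))

err_mean = rmse(impute_mean(incomplete))
err_svd = rmse(impute_svd(incomplete))
print(f"mean imputation RMSE: {err_mean:.3f}")
print(f"low-rank imputation RMSE: {err_svd:.3f}")
```

On low-rank synthetic data the iterative method recovers the masked entries far better than the mean baseline, mirroring the gap between naive and structure-aware imputation that the parameter search in the abstract is designed to measure.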
747 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches yield different outputs; some approaches may estimate the system response better than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the task is to gather knowledge about unknown model inputs, which carry inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We approach the problem with two methodologies: in the first, we use sampling-based uncertainty propagation with first-order error analysis; in the other, we place emphasis on Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is constructed so that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible. (iii) A parameter that is aleatory, but for which sufficient data are not available to model it adequately as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals.
This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties, each being an unknown element of a known interval. This uncertainty is reducible. The study shows that, due to practical limitations and computational expense, the sampling in the sampling-based methodology is not exhaustive; the methodology therefore has a high probability of underestimating the output bounds. An optimization-based strategy that converts uncertainty described by interval data into a probabilistic framework is thus necessary, and this is achieved in the present study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
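The distributional p-box of type (iii) lends itself to a double-loop Monte Carlo sketch: an outer loop samples the epistemic interval parameters, and an inner loop propagates the aleatory variable through the model. The toy model, intervals, and sample sizes below are illustrative assumptions, not the actual NASA Langley challenge model; note that the spread obtained this way tends to underestimate the true output bounds, which is what motivates the optimization-based (PBO) strategy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model response (hypothetical; the actual challenge model is a black box).
def model(x):
    return x ** 2 + 1.0

# Type (iii) parameter: x ~ Normal(mu, sigma) with epistemically uncertain
# mu in [-0.5, 0.5] and sigma in [0.8, 1.2] -- a distributional p-box.
MU_INT, SIGMA_INT = (-0.5, 0.5), (0.8, 1.2)

def output_percentile(mu, sigma, q=95, n=2000):
    """Inner loop: propagate the aleatory uncertainty for fixed epistemic values."""
    samples = model(rng.normal(mu, sigma, size=n))
    return float(np.percentile(samples, q))

# Outer loop: sample the epistemic intervals; the spread of the resulting
# percentiles approximates (and tends to underestimate) the output bounds.
percentiles = [
    output_percentile(rng.uniform(*MU_INT), rng.uniform(*SIGMA_INT))
    for _ in range(200)
]
lower, upper = min(percentiles), max(percentiles)
print(f"95th-percentile bounds: [{lower:.2f}, {upper:.2f}]")
```

Because neither loop is exhaustive, extreme corners of the epistemic intervals may never be sampled, which is exactly the underestimation risk the abstract attributes to the sampling-based methodology.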
746 Objectifying Media and Preadolescents' Media Internalization: A Developmental Perspective
Authors: Ann Rousseau, Steven Eggermont
Abstract:
The current study sought to explain preadolescents' differential susceptibility to the internalization of mediated appearance ideals, using a three-wave panel survey of preadolescent girls and boys (N = 973, Mage = 11.14). Based on the premises of objectification theory and sexual script theory, we proposed a double role for pubertal timing and cross-sex interactions in preadolescents' media internalization. More specifically, we expected pubertal timing and cross-sex interactions to (a) trigger higher levels of media internalization, directly and indirectly via body surveillance, and (b) positively moderate the relationship between objectifying media exposure and girls' and boys' media internalization. A first cross-lagged model tested whether pubertal timing and cross-sex interactions could trigger preadolescents' media internalization and body surveillance. Structural equation analysis indicated that pubertal timing (Wave 1) positively predicted body surveillance and media internalization (both Wave 3). Cross-sex involvement (Wave 1) was positively linked to media internalization (Wave 2), but body surveillance (Wave 2) was not associated with cross-sex interactions. Results also showed a reciprocal relationship between media internalization (Waves 2 and 3) and body surveillance (Waves 2 and 3). Multiple-group analysis showed that the observed relationships did not vary by gender. A second moderated moderation model examined whether (a) the relationship between objectifying media exposure (television and magazines, both Wave 1) and media internalization (Wave 3) depended on pubertal timing (Wave 1), and (b) the two-way interaction between objectifying media exposure (Wave 1) and pubertal timing (Wave 1) varied depending on cross-sex interactions (Wave 1). Results revealed that cross-sex interactions functioned as a buffer against media internalization.
For preadolescents who had fewer cross-sex interactions, early puberty (relative to peers) positively moderated the relationship between magazine exposure and the internalization of mediated appearance ideals. No significant relationships were found for television, and again no gender difference was observed. The present study suggests a double role for pubertal timing and cross-sex interactions in preadolescents' media internalization and indicates that early developers with few cross-sex experiences are particularly vulnerable to media internalization. Additionally, the current findings suggest relative gender equity in magazines' ability to cultivate media internalization among preadolescents.
Keywords: cross-sex interactions, media effects, objectification theory, pubertal timing
745 West Nile Virus in North-Eastern Italy: Overview of Integrated Surveillance Activities
Authors: Laura Amato, Paolo Mulatti, Fabrizio Montarsi, Matteo Mazzucato, Laura Gagliazzo, Michele Brichese, Manlio Palei, Gioia Capelli, Lebana Bonfanti
Abstract:
West Nile virus (WNV) re-emerged in north-eastern Italy in 2008, ten years after its first appearance in Tuscany. In 2009, a national surveillance programme was implemented, and it was re-modulated in north-eastern Italy in 2011. Here, we present the results of the 2008-2016 surveillance activities in the north-eastern Italian regions, with inferences on the WNV epidemiological trend in the area. The re-modulated surveillance programmes aimed at early detection of seasonal WNV reactivation by searching for IgM antibodies in horses. In 2013, the surveillance plans were further modified to include a risk-based approach. Spatial analysis techniques, including Bernoulli space-time scan statistics, were applied to the results of the 2010-2012 surveillance of mosquitoes, equines, and humans to identify areas where WNV reactivation was more likely to occur. From 2008 to 2016, resident horses tested positive for anti-WNV antibodies on a yearly basis (503 cases), including in areas where WNV circulation was not detected in mosquito populations. Surveillance activities detected 26 syndromic cases in horses, 102 infected mosquito pools, and WNV in 18 dead wild birds. Human cases were also recurrently detected in the study area during the surveillance period (68 cases of West Nile neuroinvasive disease). The recurrent identification of WNV in animals, mosquitoes, and humans indicates that the virus has likely become endemic in the area. In 2016, findings of WNV positives in horses or mosquitoes were included as triggers for enhancing screening activities in humans. The evolution of the epidemiological situation calls for continuous and accurate surveillance measures. The results of the 2013-2016 surveillance indicate that the risk-based approach was effective in the early detection of seasonal WNV reactivation, a key factor of the integrated surveillance strategy in endemic areas.
Keywords: arboviruses, horses, Italy, surveillance, West Nile virus, zoonoses
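The Bernoulli scan statistic used in the spatial analysis can be illustrated in miniature. The sketch below scans contiguous windows over a hypothetical 1D grid of surveillance sites (positive pools versus sampled pools, with made-up counts) and reports the window with the highest Kulldorff log-likelihood ratio; the study itself scans cylindrical space-time windows over geo-referenced mosquito, equine, and human data and assesses significance by Monte Carlo replication, which is omitted here.

```python
import math

# Hypothetical 1D grid of surveillance sites: WNV-positive pools ("cases")
# out of sampled pools ("units") per site. Purely illustrative numbers.
cases = [0, 1, 0, 4, 5, 3, 0, 1, 0, 0]
units = [10, 12, 9, 11, 10, 12, 10, 9, 11, 10]
C, N = sum(cases), sum(units)

def llr(c, n):
    """Kulldorff Bernoulli log-likelihood ratio for a window holding
    c cases among n units (0 unless the inside rate exceeds the outside)."""
    if n == 0 or n == N or c / n <= (C - c) / (N - n):
        return 0.0
    def xlogx(a, b):  # a * log(a / b), with 0 * log(0) taken as 0
        return a * math.log(a / b) if a > 0 else 0.0
    inside = xlogx(c, n) + xlogx(n - c, n)
    outside = xlogx(C - c, N - n) + xlogx(N - n - (C - c), N - n)
    null = xlogx(C, N) + xlogx(N - C, N)
    return inside + outside - null

# Scan every contiguous window and keep the most likely cluster.
best = max(
    ((i, j, llr(sum(cases[i:j]), sum(units[i:j])))
     for i in range(len(cases)) for j in range(i + 1, len(cases) + 1)),
    key=lambda t: t[2],
)
print(f"most likely cluster: sites {best[0]}..{best[1] - 1}, LLR = {best[2]:.2f}")
```

In a real analysis, the LLR of the best window would be compared against its distribution under random relabeling of cases to obtain a p-value for the detected cluster.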
744 Robust Numerical Method for Singularly Perturbed Semilinear Boundary Value Problem with Nonlocal Boundary Condition
Authors: Habtamu Garoma Debela, Gemechis File Duressa
Abstract:
In this work, our primary interest is to provide ε-uniformly convergent numerical techniques for solving singularly perturbed semilinear boundary value problems with a non-local boundary condition. These singular perturbation problems are described by differential equations in which the highest-order derivative is multiplied by an arbitrarily small parameter ε, known as the singular perturbation parameter. This leads to the existence of boundary layers, narrow regions near the boundary of the domain where the gradient of the solution becomes steep as the perturbation parameter tends to zero. Because of the layer phenomena, providing ε-uniform numerical methods is a challenging task. The term 'ε-uniform' identifies those numerical methods whose approximate solution converges to the corresponding exact solution (measured in the supremum norm) independently of the perturbation parameter ε. Thus, the purpose of this work is to develop, analyze, and improve ε-uniform numerical methods for solving singularly perturbed problems. These methods are based on a nonstandard fitted finite difference method. The basic idea behind the fitted operator finite difference method is to replace the denominator functions of the classical derivatives with positive functions derived in such a way that they capture notable properties of the governing differential equation. A uniformly convergent numerical method is constructed via a nonstandard fitted operator method combined with numerical integration to solve the problem; the non-local boundary condition is treated using numerical integration techniques. Additionally, the Richardson extrapolation technique, which improves the first-order accuracy of the standard scheme to second-order convergence, is applied to singularly perturbed convection-diffusion problems using the proposed numerical method.
Maximum absolute errors and rates of convergence for different values of the perturbation parameter and mesh size are tabulated for the numerical example considered, and the method is shown to be ε-uniformly convergent. Finally, extensive numerical experiments are conducted that support all of our theoretical findings. A concise conclusion is provided at the end of this work.
Keywords: nonlocal boundary condition, nonstandard fitted operator, semilinear problem, singular perturbation, uniformly convergent
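The fitted-operator idea can be sketched on a linear, constant-coefficient analogue of this problem class: -εu'' + u' = 0 on (0, 1) with u(0) = 0 and u(1) = 1, which has a boundary layer at x = 1. The sketch below replaces the diffusion coefficient with an Il'in-type fitting factor chosen so that the three-point scheme reproduces the layer function exactly; it is an illustrative stand-in under those assumptions, not the authors' semilinear scheme with the non-local boundary condition or the Richardson extrapolation step.

```python
import numpy as np

def solve_fitted(eps, n):
    """Fitted central-difference scheme for -eps*u'' + u' = 0,
    u(0) = 0, u(1) = 1; returns the maximum nodal error."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Il'in fitting factor: the classical eps is replaced by
    # (h/2) * coth(h / (2*eps)), which makes the three-point scheme
    # exact on both fundamental solutions, 1 and exp(x/eps).
    rho = h / (2.0 * eps)
    sigma = eps * rho / np.tanh(rho)
    # Assemble the tridiagonal system (dense here for brevity).
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0
    b[n] = 1.0
    for i in range(1, n):
        A[i, i - 1] = -sigma / h**2 - 1.0 / (2.0 * h)
        A[i, i] = 2.0 * sigma / h**2
        A[i, i + 1] = -sigma / h**2 + 1.0 / (2.0 * h)
    u = np.linalg.solve(A, b)
    # Exact solution written in an underflow-safe form.
    exact = (np.exp((x - 1.0) / eps) - np.exp(-1.0 / eps)) / (1.0 - np.exp(-1.0 / eps))
    return float(np.max(np.abs(u - exact)))

# For this constant-coefficient homogeneous problem the fitted scheme is
# nodally exact, so the error stays at roundoff level uniformly in eps on
# a fixed mesh, unlike an unfitted central scheme, which degrades as eps -> 0.
errs = {eps: solve_fitted(eps, 32) for eps in (1e-1, 1e-3, 1e-5)}
for eps, e in errs.items():
    print(f"eps = {eps:.0e}: max nodal error = {e:.2e}")
```

The general fitted-operator methods of the abstract trade this problem-specific exactness for ε-uniform convergence on wider problem classes, including the semilinear case.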