Search results for: Large Scale Floating Solar
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13036

616 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications

Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino

Abstract:

The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. Vulnerability models can be integrated with the wind hazard, which consists of associating a probability to each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impetus to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires logical combination rules that define the building’s damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II. ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. The software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components: the roof covering, the roof structure, the envelope walls and the envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach lies in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models covering building typologies not yet adequately addressed by existing works, whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
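
To make the hazard-vulnerability integration concrete, the sketch below computes an expected annual loss ratio by weighting a vulnerability function with occurrence probabilities derived from an exceedance (return-period) curve. The speeds, probabilities and the logistic vulnerability curve are illustrative placeholders, not values from ERMESS or the paper.

```python
import numpy as np

# Hypothetical wind hazard curve: gust speeds (m/s) paired with annual
# exceedance probabilities (i.e., the inverse of return periods).
speeds = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0])
exceed_prob = np.array([0.2, 0.1, 0.04, 0.02, 0.01, 0.004, 0.002])

def vulnerability(v):
    """Illustrative vulnerability model: mean damage ratio vs. gust speed."""
    return 1.0 / (1.0 + np.exp(-(v - 38.0) / 4.0))  # logistic placeholder

# Occurrence probability of each intensity bin is the difference of
# successive exceedance probabilities; the last bin keeps its tail mass.
occ_prob = np.append(-np.diff(exceed_prob), exceed_prob[-1])

eal_ratio = np.sum(vulnerability(speeds) * occ_prob)  # expected annual loss ratio
print(f"Expected annual loss ratio: {eal_ratio:.4f}")
```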

Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses

Procedia PDF Downloads 181
615 Assessment of Influence of Short-Lasting Whole-Body Vibration on Joint Position Sense and Body Balance–A Randomised Masked Study

Authors: Anna Slupik, Anna Mosiolek, Sebastian Wojtowicz, Dariusz Bialoszewski

Abstract:

Introduction: Whole-body vibration (WBV) uses high frequency mechanical stimuli generated by a vibration plate and transmitted through bone, muscle and connective tissues to the whole body. Research has shown that long-term vibration-plate training improves neuromuscular facilitation, especially in afferent neural pathways responsible for the conduction of vibration and proprioceptive stimuli, muscle function, balance and proprioception. Some researchers suggest that the vibration stimulus briefly inhibits the conduction of afferent signals from proprioceptors and can interfere with the maintenance of body balance. The aim of this study was to evaluate the influence of a single set of exercises associated with whole-body vibration on joint position sense and body balance. Material and methods: The study enrolled 55 people aged 19-24 years. These individuals were randomly divided into a test group (30 persons) and a control group (25 persons). Both groups performed the same set of exercises on a vibration plate. In the test group, the following vibration parameters were used: frequency of 20 Hz and amplitude of 3 mm. The control group performed the exercises on the vibration plate while it was off. All participants were instructed to perform six dynamic exercises lasting 30 seconds each with a 60-second period of rest between them. The exercises involved large muscle groups of the trunk, pelvis and lower limbs. Measurements were carried out before and immediately after exercise. Joint position sense (JPS) was measured in the knee joint for the starting position at 45° in an open kinematic chain. JPS error was measured using a digital inclinometer. Balance was assessed in a standing position with both feet on the ground, with the eyes open and closed (each test lasting 30 s), using Matscan with FootMat 7.0 SAM software. The surface of the ellipse of confidence and the front-back and right-left sway were measured to assess balance. Statistical analysis was performed using Statistica 10.0 PL software. Results: There were no significant differences between the groups, either before or after the exercise (p > 0.05). JPS did not change in either the test (10.7° vs. 8.4°) or the control group (9.0° vs. 8.4°). No significant differences were shown in any of the test parameters during balance tests with the eyes open or closed in either the test or the control group (p > 0.05). Conclusions: 1. Deterioration in proprioception or balance was not observed immediately after the vibration stimulus. This suggests that vibration-induced blockage of proprioceptive stimuli conduction can have only a short-lasting effect, present only while a vibration stimulus is applied. 2. Short-term use of vibration in treatment does not impair proprioception and seems to be safe for patients with proprioceptive impairment. 3. These results need to be supplemented with an assessment of proprioception during the application of vibration stimuli. Additionally, the impact of the vibration parameters used in the exercises should be evaluated.

Keywords: balance, joint position sense, proprioception, whole body vibration

Procedia PDF Downloads 328
614 Implication of Woman’s Status on Child Health in India

Authors: Rakesh Mishra

Abstract:

India’s demography has always amazed the world because of its unprecedented outcomes in the presence of multifaceted socioeconomic and geographical characteristics. Although India was the first country to implement family planning, in 1952, it has the world’s second largest population, with some of its states, such as Uttar Pradesh, holding more people than Brazil, the world’s fifth most populous country. With such numbers, India is particularly prone to the demographic disparities persisting across its territories, brought about by inequalities in the availability, accessibility and attainability of socioeconomic and various other resources. The fifth Millennium Development Goal emphasizes improving maternal and child health across the world, as children’s development is very important for the overall development of society and the best way to develop national human resources is to take care of children. The target is to reduce infant deaths by three quarters between 1990 and 2015. Child health status depends on care and delivery by trained personnel, particularly through institutional facilities, which is in turn associated with the status of the mother. However, delivery in institutional facilities and delivery by skilled personnel are rising only slowly in India. The main objective of the present study is to measure child health status based on the educational and occupational background of women in India. The study indicates that women’s education plays a very crucial role in deciding the health of the newborn and access to family planning, whereas women’s autonomy shows mixed results across the states of India. It is observed that rural women are 1.61 times more likely to exclusively breastfeed their children compared to urban women. Relative to the Hindu category, women belonging to other religious communities were 21 percent less likely to exclusively breastfeed their child. Taking scheduled caste as the reference category, the odds of exclusive breastfeeding decrease for the other castes, and the difference is found to be significant for the general category. Women of high educational status have higher odds of using family planning methods in most of the southern states of India. By and large, girls and boys are about equally undernourished. Undernutrition is generally lower for first births than for subsequent births and consistently increases with increasing birth order for all measures of nutritional status. It is to be noted that at age 12-23 months, when many children are being weaned from breast milk, 30 percent of children are severely stunted and around 21 percent are severely underweight. This paper thus presents evidence on the patterns of prevailing child health status in India and its states with reference to mothers’ socioeconomic and biological characteristics, examines trends in these, and discusses plausible explanations.
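
As an aside on the method: the odds ratios quoted above (e.g., rural women being 1.61 times more likely to exclusively breastfeed) are the exponentiated coefficients of a binary logistic regression. Below is a minimal sketch on synthetic data, not the study's survey data; the variable names and the assumed effect size are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustration: model the odds of exclusive breastfeeding as a
# function of rural residence, with an assumed true odds ratio of 1.61.
rng = np.random.default_rng(0)
n = 2000
rural = rng.integers(0, 2, n)                     # 1 = rural, 0 = urban
logit = -0.5 + np.log(1.61) * rural               # assumed true effect
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit))) # simulated outcome

X = sm.add_constant(pd.DataFrame({"rural": rural}))
fit = sm.Logit(y, X).fit(disp=0)
print(np.exp(fit.params))  # exponentiated coefficients = odds ratios
```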

Keywords: immunization, exclusive breastfeeding, under five mortality, binary logistic regression, ordinal regression, life table

Procedia PDF Downloads 265
613 Leveraging Digital Cyber Technology for Self-Care and Improved Management of DMPA-SC Clients

Authors: Oluwaseun Adeleke, Grace Amarachi Omenife, Jennifer Adebambo, Mopelola Raji, Anthony Nwala, Mogbonjubade Adesulure

Abstract:

Introduction: The incorporation of digital technology in healthcare systems is instrumental in transforming the delivery, management, and overall experience of healthcare, and holds the potential to scale up access through the over 200 million active mobile phones used in Nigeria. Digital tools enable increased access to care, stronger client engagement, progress in research and data-driven insights, and more effective promotion of self-care and do-it-yourself practices. The Delivering Innovation in Self-Care (DISC) project has, since 2021, played a pivotal role in granting women greater autonomy over their sexual and reproductive health (SRH) through a variety of approaches, including information and training to self-inject contraception (DMPA-SC). To optimize its outcomes, the project also leverages digital technology platforms such as social media (Facebook, Instagram, and the Meet Tina chatbot via WhatsApp), the Customer Relationship Management (CRM) application Freshworks, and Viamo. Methodology: The project has been successful at optimizing digital cyberspace interaction to sensitize individuals effectively about self-injection (SI) and provide linkages to SI services. This platform employs the Freshworks CRM software application, along with specially trained personnel known as Cyber IPC Agents, and DHIS calling centers. Integration of the Freshworks CRM software with social media allows a direct connection with clients to address emerging issues, schedule follow-ups, send reminders to improve compliance with self-injection schedules, enhance the overall user experience for SI clients, and generate comprehensive reports and analytics on client interactions. Interactions cover a range of topics, including how to use SI, learning more about SI, side effects and their management, accessing services, fertility, ovulation, other family planning methods, and inquiries related to sexual and reproductive health; an address log is used to connect clients with nearby facilities or online pharmacies. Results: Between March and September, a total of 5,403 engagements were recorded, of which 4,685 were satisfactorily resolved. Since the program's inception, digital advertising has created 233,633,075 impressions, reached 12,715,582 persons, and resulted in 3,394,048 clicks. Conclusion: Leveraging digital technology has proven to be an invaluable tool in client management and improving the client experience. The use of cyber technology has enabled the successful development and maintenance of client relationships, which have been effective at providing support, facilitating delivery of and compliance with DMPA-SC self-injection services, and ensuring overall client satisfaction. Concurrently, the qualitative data provided, including user experience feedback, has yielded crucial insights that inform the decision-making process and guide the normalization of self-care behavior.

Keywords: self-care, DMPA-SC self-injection, digital technology, cyber technology, Freshworks CRM software

Procedia PDF Downloads 67
612 Genome-Wide Homozygosity Analysis of the Longevous Phenotype in the Amish Population

Authors: Sandra Smieszek, Jonathan Haines

Abstract:

Introduction: Numerous research efforts have focused on searching for ‘longevity genes’. However, attempts to decipher the genetic component of the longevous phenotype have met with limited success, and the mechanisms governing longevity remain to be explained. We conducted a genome-wide homozygosity analysis (GWHA) of the founder population of the Amish community in central Ohio. While genome-wide association studies using unrelated individuals have revealed many interesting longevity-associated variants, these variants are typically of small effect and cannot explain the observed patterns of heritability for this complex trait. The Amish, by contrast, provide a large cohort of extended kinships, allowing for in-depth analysis via a family-based approach. Heritability of longevity increases with age, with a significant genetic contribution being seen in individuals living beyond 60 years of age. In the present analysis, we show that the estimated heritability of longevity increases with age, particularly on the paternal side. Methods: The present analysis integrated both phenotypic and genotypic data and led to the discovery of a series of variants, distinct for populations stratified by age and distinct for paternal and maternal cohorts. Specifically, 5,437 subjects were analyzed, and a subset of 893 successfully genotyped individuals was used to assess chip heritability. We conducted the homozygosity analysis to examine whether homozygosity is associated with an increased likelihood of living beyond 90. We analyzed the Amish cohort genotyped for 614,957 SNPs. Results: We delineated 10 significant regions of homozygosity (ROH) specific to the age group of interest (>90). Of particular interest was an ROH on chromosome 13 (P < 0.0001), where the lead SNPs rs7318486 and rs9645914 point to COL4A2. COL4A2 encodes one of the six subunits of type IV collagen; the C-terminal portion of the protein, known as canstatin, is an inhibitor of angiogenesis and tumor growth. COL4A2 mutations have been reported with a broad spectrum of cerebrovascular, renal, ophthalmological, cardiac, and muscular abnormalities. The second region of interest points to IRS2. Furthermore, we built a classifier using the SNPs from the significant ROH regions that achieved an AUC of 0.945, giving the ability to discriminate between those living beyond 90 years of age and those who do not. Conclusion: Our results suggest that a family history of longevity does indeed contribute to increasing the odds of individual longevity. Preliminary results are consistent with the conjecture that the heritability of longevity is substantial when we look at the oldest fifth and smaller percentiles of survival, specifically in males. We will validate all candidate variants in independent cohorts of centenarians to test whether they are robustly associated with human longevity. The regions of interest identified via ROH analysis could be of profound importance for understanding the genetic underpinnings of longevity.
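
Below is a minimal sketch of the kind of scan underlying an ROH analysis: searching a sample's genotype vector for long uninterrupted homozygous stretches. Real pipelines (e.g., PLINK's --homozyg) tolerate occasional heterozygous calls and work in physical distance along the chromosome; this strict-run version and its thresholds are illustrative only.

```python
import numpy as np

def runs_of_homozygosity(genotypes, min_snps=50):
    """Return (start, end) index pairs of contiguous homozygous stretches.

    genotypes: array of 0/1/2 allele counts, where 1 = heterozygous.
    Strict runs only; real ROH callers allow a few heterozygous calls.
    """
    homo = genotypes != 1
    runs, start = [], None
    for i, h in enumerate(homo):
        if h and start is None:
            start = i                      # open a new homozygous run
        elif not h and start is not None:
            if i - start >= min_snps:      # keep runs long enough to count
                runs.append((start, i))
            start = None
    if start is not None and len(homo) - start >= min_snps:
        runs.append((start, len(homo)))    # run extends to the last SNP
    return runs

g = np.random.default_rng(1).choice([0, 1, 2], size=5000, p=[0.4, 0.2, 0.4])
print(runs_of_homozygosity(g, min_snps=20))
```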

Keywords: regions of homozygosity, longevity, SNP, Amish

Procedia PDF Downloads 232
611 Integration of a Protective Film to Enhance the Longevity and Performance of Miniaturized Ion Sensors

Authors: Antonio Ruiz Gonzalez, Kwang-Leong Choy

Abstract:

The measurement of electrolytes has a high value in the clinical routine. Ions are present in all body fluids at variable concentrations and are involved in multiple pathologies such as heart failure and chronic kidney disease. In the case of dissolved potassium, although a high concentration in the blood (hyperkalemia) is relatively uncommon in the general population, it is one of the most frequent acute electrolyte abnormalities. In recent years, the integration of thin film technologies in this field has allowed the development of highly sensitive biosensors with ultra-low limits of detection for the assessment of metals in liquid samples. However, despite the current efforts in the miniaturization of sensitive devices and their integration into portable systems, only a limited number of successful examples used commercially can be found. This fact can be attributed to the high cost involved in their production and the sustained degradation of the electrodes over time, which causes a signal drift in the measurements. Thus, there is an unmet need for the development of low-cost and robust sensors for the real-time monitoring of analyte concentrations in patients to allow the early detection and diagnosis of diseases. This paper reports a thin film ion-selective sensor for the evaluation of potassium ions in aqueous samples. As an alternative to conventional fabrication methods, aerosol-assisted chemical vapor deposition (AACVD) was applied owing to its cost-effectiveness and fine control over film deposition. This technique does not require a vacuum and is suitable for the coating of large surface areas and structures with complex geometries. The approach allowed the fabrication of highly homogeneous surfaces with well-defined microstructures onto 50 nm thin gold layers. The degradative processes of the ubiquitously employed poly(vinyl chloride) membranes in contact with an electrolyte solution were studied, including the polymer leaching process, mechanical desorption of nanoparticles and chemical degradation over time. Rational design of a protective coating based on an organosilicon material in combination with cellulose was then carried out to improve the long-term stability of the sensors, showing an improvement in performance after 5 weeks. The antifouling properties of the coating were assessed using a cutting-edge quartz microbalance sensor, allowing the quantification of adsorbed proteins in the nanogram range. A correlation between the microstructural properties of the films, their surface energy, and biomolecule adhesion was then found and used to optimize the protective film.

Keywords: hyperkalemia, drift, AACVD, organosilicon

Procedia PDF Downloads 123
610 Combined Treatment with Microneedling and Chemical Peels Improves Periorbital Wrinkles and Skin Laxity

Authors: G. Kontochristopoulos, T. Spiliopoulos, V. Markantoni, E. Platsidaki, A. Kouris, E. Balamoti, C. Bokotas, G. Haidemenos

Abstract:

Introduction: There is high patient demand for periorbital rejuvenation, since the facial area is often the first to show visible signs of aging. With advancing age, there are sometimes marked changes in the skin, fat, muscle and bone of the periorbital region, resulting in wrinkles and skin laxity. These changes are among the easiest to correct using several minimally invasive techniques, which have become increasingly popular over the last decade. Lasers, radiofrequency, botulinum toxin, fat grafting and fillers are available treatments, sometimes used in combination with traditional blepharoplasty. This study attempts to show the benefits of a minimally invasive approach to periorbital wrinkles and skin laxity combining microneedling and 10% trichloroacetic acid (TCA) peels. Method: Eleven female patients aged 34-72 enrolled in the study. They all gave informed consent after receiving detailed information regarding the treatment procedure. Exclusion criteria were previous treatment for the same condition in the past six months, pregnancy, allergy or hypersensitivity to the components, and infection, inflammation or photosensitivity in the affected region. All patients had diffuse periorbital wrinkles and mild to moderate upper or lower eyelid skin laxity. They were treated with the Automatic Microneedle Therapy System-Handhold followed by topical application of a 10% trichloroacetic acid solution. Needling at a 0.25 mm depth was performed in both lateral (x-y) directions; subsequently, the peeling agent was applied to each periorbital area for five minutes. Patients were subjected to the above combination every two weeks for a series of four treatments. Subsequently, they were followed up regularly every month for two months. The effect was photo-documented. A Physician's and a Patient's Global Assessment Scale was used to evaluate the efficacy of the treatment (0-25% indicated a poor response, 25-50% fair, 50-75% good and 75-100% excellent). Safety was assessed by monitoring early and delayed adverse events. Results: At the end of the study, almost all patients demonstrated significant aesthetic improvement. Physicians assessed the improvement as fair in 9 participants (81.8%) and good in 2 (18.1%). The Patient's Global Assessment rated the response as fair in 6 participants (54.5%) and good in 5 (45.4%). The procedure was well tolerated and all patients were satisfied. Mild discomfort and transient erythema were quite common during or immediately after the procedure, but these were only temporary. During the monthly follow-up, no complications or scars were observed. Conclusions: Microneedling is known as a simple, office-based collagen induction therapy. A low-concentration TCA solution applied to an epidermis made more permeable by microneedling can reach the dermis more effectively. In the present study, chemical peels with 10% TCA acted as an adjuvant to microneedling, causing controlled skin damage and promoting regeneration and rejuvenation of tissues. This combined therapy improved periorbital fine lines, wrinkles, and the overall appearance of the skin. Thus, it constitutes an alternative treatment of periorbital skin aging, with encouraging results and minor side effects.

Keywords: chemical peels, microneedling, periorbital wrinkles, skin laxity

Procedia PDF Downloads 354
609 Examination of Porcine Gastric Biomechanics in the Antrum Region

Authors: Sif J. Friis, Mette Poulsen, Torben Strom Hansen, Peter Herskind, Jens V. Nygaard

Abstract:

Gastric biomechanics governs a large range of scientific and engineering fields, from gastric health issues to interaction mechanisms between external devices and the tissue. Determination of the mechanical properties of the stomach is thus crucial, both for understanding gastric pathologies and for the development of medical concepts and device designs. Although the field of gastric biomechanics is emerging, advances within medical devices interacting with gastric tissue could greatly benefit from an increased understanding of tissue anisotropy and heterogeneity. Thus, in this study, uniaxial tensile tests of gastric tissue were executed in order to study biomechanical properties within the same individual as well as across individuals. With biomechanical tests in the strain domain, tissue from the antrum region of six porcine stomachs was tested using eight samples from each stomach (n = 48). The samples were cut so that they followed the dominant fiber orientations. Accordingly, from each stomach, four samples were longitudinally oriented and four samples were circumferentially oriented. A step-wise stress relaxation test with five incremental steps up to 25% strain, with 200 s rest periods at each step, was performed, followed by a 25% strain ramp test at three different strain rates. Theoretical analysis of the data provided stress-strain/time curves as well as 20 material parameters (e.g., stiffness coefficients, dissipative energy densities, and relaxation time coefficients) used for statistical comparisons between samples from the same stomach as well as between stomachs. Results showed that, for the 20 material parameters, heterogeneity across individuals, when extracting samples from the same area, was of the same order as the variation among samples within the same stomach. For samples from the same stomach, the mean deviation percentage for all 20 parameters was 21% and 18% for longitudinal and circumferential orientations, compared to 25% and 19%, respectively, for samples across individuals. This observation was also supported by a nonparametric one-way ANOVA, whose results showed that the 20 material parameters from each of the six stomachs came from the same distribution at a level of statistical significance of P > 0.05. Direction-dependency was also examined, and it was found that the maximum stress for longitudinal samples was significantly higher than for circumferential samples. However, there were no significant differences in the 20 material parameters, with the exception of the equilibrium stiffness coefficient (P = 0.0039) and two other stiffness coefficients found from the relaxation tests (P = 0.0065, 0.0374). Nor did the stomach tissue show any significant differences between the three strain rates used in the ramp test. Heterogeneity within the same region has not been examined before; yet the importance of the sampling area has been demonstrated in this study. All material parameters found are essential for understanding the passive mechanics of the stomach and may be used for mathematical and computational modeling. Additionally, an extension of the protocol used may be relevant for compiling a comparative study between the human stomach and the pig stomach.
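
For readers unfamiliar with how relaxation-time and stiffness coefficients are extracted from such tests, the sketch below fits a two-term Prony series to a synthetic normalized relaxation curve. The Prony-series form is a common choice for soft tissue viscoelasticity, but the parameter set and values here are illustrative, not the study's 20-parameter formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-term Prony series for a normalized stress relaxation modulus:
# G(t) = g_inf + g1*exp(-t/tau1) + g2*exp(-t/tau2)
def prony(t, g_inf, g1, tau1, g2, tau2):
    return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

t = np.linspace(0.0, 200.0, 400)          # 200 s hold, as in the protocol
true = prony(t, 0.5, 0.3, 5.0, 0.2, 60.0) # synthetic "measured" response
noisy = true + np.random.default_rng(2).normal(0.0, 0.005, t.size)

p0 = [0.5, 0.3, 1.0, 0.2, 50.0]           # initial parameter guess
params, _ = curve_fit(prony, t, noisy, p0=p0)
print(dict(zip(["g_inf", "g1", "tau1", "g2", "tau2"], params.round(3))))
```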

Keywords: antrum region, gastric biomechanics, loading-unloading, stress relaxation, uniaxial tensile testing

Procedia PDF Downloads 433
608 Lifespan Assessment of the Fish Crossing System of Itaipu Power Plant (Brazil/Paraguay) Based on the Reaching of Its Sedimentological Equilibrium Computed by 3D Modeling and Churchill Trapping Efficiency

Authors: Anderson Braga Mendes, Wallington Felipe de Almeida, Cicero Medeiros da Silva

Abstract:

This study aimed to assess the lifespan of the fish transposition system of the Itaipu Power Plant (Brazil/Paraguay) by using 3D hydrodynamic modeling and the Churchill trapping efficiency in order to identify the sedimentological equilibrium configuration in the main pond of the Piracema Channel, which is part of a 10 km hydraulic circuit that enables fish migration from downstream to upstream of the Itaipu Dam (and vice-versa), overcoming a 120 m water drop. For that, bottom data from 2002 (its opening year) and 2015 were collected and analyzed, along with bed material from 12 stations, for the purpose of identifying their granulometric profiles. The Shields and Yalin-Karahan diagrams for initiation of motion of bed material were used to determine the critical bed shear stress for the sedimentological equilibrium state, based on the sort of sediment (grain size) to be found at the bottom once the balance is reached. Such granulometry was inferred by analyzing the coarser material (fine and medium sands) which flows into the pond and deposits in its backwater zone, adopting a range of diameters within the upper and lower limits of that sand stratification. The software Delft 3D was used to compute the bed shear stress at every station under analysis. By modifying the input bathymetry of the main pond of the Piracema Channel so that the computed bed shear stresses at all stations fell simultaneously within the intervals of acceptable critical stresses, it was possible to foresee the bed configuration of the main pond when the sedimentological equilibrium is reached. Under such a condition, 97% of the whole pond capacity will be silted, and a shallow water course with depths ranging from 0.2 m to 1.5 m will be formed; in 2002, depths ranged from 2 m to 10 m. Outside that water path, the new bottom will be practically flat and covered by a layer of water 0.05 m thick. Thus, in the future the main pond of the Piracema Channel will no longer fulfil its purpose of providing a resting place for migrating fish species, added to the fact that it may become an insurmountable barrier for medium and large sized specimens. Everything considered, it was estimated that its lifespan, from the year of its opening to the moment of the sedimentological equilibrium configuration, will be approximately 95 years, almost half of the computed lifespan of the Itaipu Power Plant itself. However, it is worth mentioning that drawbacks concerning the silting of the main pond will start being noticed much earlier than that, owing to the reasons previously mentioned.
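
By way of illustration, the Shields criterion mentioned above gives the critical bed shear stress for a grain of diameter d as tau_cr = theta_cr (rho_s - rho_w) g d. The sketch below evaluates it for fine-to-medium sand with a constant theta_cr; in practice, theta_cr is read from the Shields or Yalin-Karahan diagram as a function of the grain Reynolds number, and none of these values are the study's calibrated ones.

```python
# Minimal sketch of the Shields initiation-of-motion check used to find
# the critical bed shear stress for a given grain size.
RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water and quartz sand (kg/m3), gravity

def critical_shear_stress(d50_m, theta_cr=0.047):
    """tau_cr = theta_cr * (rho_s - rho_w) * g * d50, in Pa."""
    return theta_cr * (RHO_S - RHO_W) * G * d50_m

for d50 in (0.125e-3, 0.25e-3, 0.5e-3):  # fine to medium sand (m)
    tau_cr = critical_shear_stress(d50)
    print(f"d50 = {d50 * 1e3:.3f} mm -> tau_cr = {tau_cr:.3f} Pa")
```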

Keywords: 3D hydrodynamic modeling, Churchill trapping efficiency, fish crossing system, Itaipu power plant, lifespan, sedimentological equilibrium

Procedia PDF Downloads 233
607 Stainless Steel Degradation by Sulphide Mining

Authors: Aguasanta M. Sarmiento, Jose Miguel Davila, Juan Carlos Fortes, Maria Luisa de la Torre

Abstract:

Acid mine drainage (AMD) is an acidic leachate with high levels of metals and sulphates in solution, which seriously affects the durability and strength of metallic materials used in the construction of structural and mechanical components. This paper presents the results of the evolution over time of the reduction in tensile strength and the defects in AISI 304 stainless steel in contact with acid mine drainage. For this purpose, a total of 30 bars with a diameter of 8 mm and a length of 14 cm were placed transversely in the course of a stream contaminated by AMD from the sulphide mines of the Iberian Pyrite Belt (SW Spain). This stream has an average pH of 2.6, a potential of 660 mV and average concentrations of 12 g/L of sulphates, 1.2 g/L of Fe, 191 mg/L of Zn, etc. After every two months of exposure, six stainless steel bars were extracted from the acid stream. They were subjected to surface roughness analysis carried out with a Mitutoyo Surftest SJ-210 surface roughness tester. The analysis was carried out at three different points on five specimens from each series, and the average reading of each parameter was calculated in order to ensure the accuracy of the measurements and the surface coverage. The arithmetic mean roughness value (Ra), mean roughness depth (Rz) and root mean square roughness (Rq) were measured. Five specimens from each series were statically tensile tested using a universal testing machine (Servosis ME 403, 200 kN). The specimens were clamped at their ends with two grips for cylindrical sections, and the tensile force was applied at a constant speed of 0.5 kN/s, according to the requirements of standard UNE-EN ISO 6892-1:2020. To determine the modulus of elasticity, limits close to 15% and 55% of the maximum load were used, depending on the course of each test. Field Emission Scanning Electron Microscopy (FESEM) was used to observe the corrosion products and defects generated by exposure to AMD. Energy dispersive X-ray spectrometry (EDS) was used to analyze the chemical composition of the corrosion products formed. For this purpose, small pieces were cut from the resulting specimens, cleaned and embedded in epoxy resin. The results show that after only 5 months of exposure of AISI 304 stainless steel to the mining environment, the surface roughness increases significantly, with average depths almost 6 times greater than the initial one. Cracks are observed on the surface of the material, which increase in size with exposure time. A large number of grains with a composition of more than 57% Pb and 16% Sn can be observed inside these cracks. Tensile tests show a reduction in the resistance of this material after only two months of exposure. The results show the serious problems that would result from the use of this material for mechanical components in a sulphide mining environment, not only because of the significant reduction in the lifetime of such components but also because of the implications for human safety.

Keywords: acid mine drainage, corrosion, mechanical properties, stainless steel

Procedia PDF Downloads 9
606 Fractional, Component and Morphological Composition of Ambient Air Dust in the Areas of Mining Industry

Authors: S.V. Kleyn, S.Yu. Zagorodnov, A.A. Kokoulina

Abstract:

Technogenic emissions of the mining and processing complex are characterized by a high content of chemical components and solid dust particles. However, each industrial enterprise and its surrounding area have features that require refinement and parameterization. Numerous studies have shown the negative impact of fine dust PM10 and PM2.5 on health, as well as the possibility of absorption of toxic components, including heavy metals, by dust particles. The aim of the study was the quantitative assessment of the fractional and particle size composition of ambient air dust in the area impacted by a primary magnesium production complex. We also describe the morphological features of the dust particles. Study methods: To identify the dust emission sources, an analysis of the production process was carried out. The particulate composition of the emissions was measured using a Microtrac S3500 laser particle analyzer (covering a particle size range of 20 nm to 2000 µm). Particle morphology and the component composition were established by electron microscopy, using a high-resolution scanning microscope (magnification 5 to 300,000 times) with an X-ray fluorescence device (Hitachi S3400N). The chemical composition was identified by X-ray analysis of the samples using a Shimadzu XRD-700 X-ray diffractometer. The dust pollution level was determined using model calculations of emission dispersion in the atmosphere; the calculations were verified by instrumental studies. Results of the study: The results demonstrated that the dust emissions of different technical processes are heterogeneous and their fractional structure is complicated. The percentage of particle sizes up to 2.5 micrometres inclusive ranged from 0.00 to 56.70%; particle sizes less than 10 microns inclusive, from 0.00 to 85.60%; and particle sizes greater than 10 microns, from 14.40 to 100.00%. During microscopy, nanoscale particles were detected. The studied dust particles have round, irregular, cubic and integral shapes. The composition of the dust includes magnesium, sodium, potassium, calcium, iron, and chlorine. On the basis of the obtained results, model calculations of dust emission dispersion were performed and the areas of fine dust PM10 and PM2.5 distribution were established. It was found that the dust emissions of the fine fractions PM10 and PM2.5 are dispersed over large distances, beyond the border of the industrial site of the enterprise. The population living near the enterprise is exposed to the risk of diseases associated with dust exposure. The data were transferred to the economic entity to support decisions on measures to minimize the risks. Exposure and health risk indicators are used to provide personalized health and preventive care to citizens living in the area negatively impacted by the facility.

Keywords: dust emissions, exposure assessment, PM10, PM2.5

Procedia PDF Downloads 261
605 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images that are used for topographic mapping. Most of these satellites carry push-broom sensors. These sensors are optical scanners equipped with linear arrays of CCDs and have been deployed on most Earth observation satellites. In addition, the LROC is equipped with two push-broom NACs that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopy systems with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t), where t is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the Planetary Data System (PDS). The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to that of commercial and open-source software, the computational efficiency of the developed model is high, and the model can be used in different environments with various sensors, although the implementation process is much more cost- and effort-consuming.
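
For reference, the collinearity condition for a push-broom line can be written in its standard textbook form (not necessarily the authors' exact parameterization), with focal length f, ground point (X, Y, Z), and per-line exterior orientation interpolated in time:

```latex
% Collinearity condition for a push-broom line imaged at epoch t.
% Each scan line has its own exterior orientation: the exposure center
% (X_0(t), Y_0(t), Z_0(t)) and rotation matrix R(t) = [r_ij(t)], both
% modeled as polynomials in t. With the linear array along the image
% y-axis, the along-track image coordinate is nominally zero.
\[
\begin{aligned}
0 &= -f\,\frac{r_{11}(t)(X-X_0(t)) + r_{12}(t)(Y-Y_0(t)) + r_{13}(t)(Z-Z_0(t))}
              {r_{31}(t)(X-X_0(t)) + r_{32}(t)(Y-Y_0(t)) + r_{33}(t)(Z-Z_0(t))},\\[4pt]
y &= -f\,\frac{r_{21}(t)(X-X_0(t)) + r_{22}(t)(Y-Y_0(t)) + r_{23}(t)(Z-Z_0(t))}
              {r_{31}(t)(X-X_0(t)) + r_{32}(t)(Y-Y_0(t)) + r_{33}(t)(Z-Z_0(t))}
\end{aligned}
\]
```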

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 63
604 qPCR Method for Detection of Halal Food Adulteration

Authors: Gabriela Borilova, Monika Petrakova, Petr Kralik

Abstract:

Nowadays, European producers are increasingly interested in the production of halal meat products. Halal meat has been increasingly appearing in the EU's market network, and meat products from European producers are being exported to Islamic countries. Halal criteria are mainly related to the origin of the muscle used in production, and also to the way products are obtained and processed. Although the EU has legislatively addressed the question of food authenticity, the circumstances of previous years, when products with undeclared horse or poultry meat content appeared on EU markets, raised the question of the effectiveness of control mechanisms. Replacement of expensive or unavailable types of meat with low-priced meat has been occurring on a global scale for a long time. Likewise, halal products may be contaminated (falsified) with pork or food components obtained from pigs. These components include collagen, offal, pork fat, mechanically separated pork, emulsifiers, blood, dried blood, dried blood plasma, gelatin, and others. These substances can influence the sensory properties of meat products - color, aroma, flavor, consistency and texture - or they are added for preservation and stabilization. Food manufacturers sometimes resort to these substances mainly because of their ready availability and low prices. However, the use of these substances is not always declared on the product packaging. Verification of the presence of declared ingredients, including the detection of undeclared ingredients, is among the basic control procedures for determining the authenticity of food. Molecular biology methods based on DNA analysis offer rapid and sensitive testing. PCR and its modifications can be successfully used to identify animal species in single- and multi-ingredient raw and processed foods, and qPCR is the first choice for food analysis. Like all PCR-based methods, it is simple to implement, and its greatest advantage is the absence of post-PCR visualization by electrophoresis. qPCR allows detection of trace amounts of nucleic acids and, by comparing an unknown sample with a calibration curve, can also provide information on the absolute quantity of individual components in the sample. Our study addresses the fact that most molecular biological work on the identification and quantification of animal species is based on the construction of specific primers amplifying a selected section of the mitochondrial genome. In addition, the sections amplified in conventional PCR are relatively long (hundreds of bp) and unsuitable for use in qPCR, because when DNA is fragmented, amplification of long target sequences is quite limited. Our study focuses on finding a suitable genomic DNA target and optimizing qPCR to reduce the variability and distortion of results, which is necessary for the correct interpretation of quantification results. In halal products, the impact of falsification of meat products by the addition of components derived from pigs is all the greater because it concerns not just the economic aspect but above all the religious and social aspects. This work was supported by the Ministry of Agriculture of the Czech Republic (QJ1530107).
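
As background on qPCR absolute quantification: the calibration (standard) curve is a linear fit of Ct against log10 quantity, its slope gives the amplification efficiency, and unknowns are read off by inverting the fit. Below is a minimal sketch with made-up Ct values, not data from this study.

```python
import numpy as np

# Standard curve: Ct = slope * log10(quantity) + intercept.
# Amplification efficiency E = 10**(-1/slope) - 1 (E = 1.0 means
# perfect doubling of the target every cycle).
log_qty = np.array([1, 2, 3, 4, 5, 6])            # log10 copies in standards
ct = np.array([33.1, 29.8, 26.4, 23.1, 19.7, 16.4])

slope, intercept = np.polyfit(log_qty, ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")

ct_unknown = 24.9                                  # Ct of a meat sample
copies = 10 ** ((ct_unknown - intercept) / slope)  # invert the fit
print(f"estimated target copies: {copies:.0f}")
```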

Keywords: food fraud, halal food, pork, qPCR

Procedia PDF Downloads 247
603 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as in other parts of the world, is one of the most stressed communities, due to reasons such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, in some cases leading to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of the crop's lifecycle to prevent damage and loss in revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to building a smart assistant using analytics and big data which could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNNs) has been successfully applied to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropout (to avoid overfitting). The models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to modify the weights of parameters learnt on the ImageNet dataset and apply them to crop diseases, which reduces the number of training epochs. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift and blurring are used to improve accuracy with images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated using the tomato crop. In India, tomato is affected by 10 different diseases. Our model achieves an accuracy of more than 95% in correctly classifying the diseases. The main contribution of our research is to create a personal assistant for farmers for managing plant disease; although the model was validated using the tomato crop, it can be easily extended to other crops. The advancement of technology in computing and the availability of large data have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and huge smartphone penetration, the feasibility of implementing these models is high, resulting in timely advice to farmers and thus increasing farmers' income and reducing input costs.
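
Below is a minimal Keras sketch of the architecture ingredients named above: convolution filters, max pooling, dense layers, dropout, and on-the-fly augmentation (rotation, zoom, shift). The input size, filter counts and other hyperparameters are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 10  # e.g., the 10 tomato diseases mentioned above

model = models.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    layers.RandomRotation(0.1),           # data augmentation: rotation
    layers.RandomZoom(0.1),               # data augmentation: zoom
    layers.RandomTranslation(0.1, 0.1),   # data augmentation: shift
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                  # guards against overfitting
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

For the binary (healthy / not healthy) variant, the last layer would shrink to a single sigmoid unit with a binary cross-entropy loss; for transfer learning, the convolutional stack would be replaced by a pretrained ImageNet base with frozen weights.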

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning

Procedia PDF Downloads 120
602 Expanding Access and Deepening Engagement: Building an Open Source Digital Platform for Restoration-Based STEM Education in the Largest Public-School System in the United States

Authors: Lauren B. Birney

Abstract:

This project focuses upon the expansion of the existing "Curriculum and Community Enterprise for the Restoration of New York Harbor in New York City Public Schools" (NSF EHR DRL 1440869, NSF EHR DRL 1839656 and NSF EHR DRL 1759006). The project is recognized locally as "Curriculum and Community Enterprise for Restoration Science," or CCERS. CCERS is a comprehensive model of ecological restoration-based STEM education for urban public-school students. Following an accelerated rollout, CCERS is now being implemented in 120+ Title 1 funded NYC Department of Education middle schools, led by two cohorts of 250 teachers, serving more than 11,000 students in total. Initial results and baseline data suggest that the CCERS model, with the Billion Oyster Project (BOP) as its local restoration ecology-based STEM curriculum, is having profound impacts on students, teachers, school leaders, and the broader community of CCERS participants and stakeholders. Students and teachers report being receptive to the CCERS model and deeply engaged in the initial phase of curriculum development, citizen science data collection, and student-centered, problem-based STEM learning. The BOP CCERS Digital Platform will serve as the central technology hub for all research, data, data analysis, resources, materials and student data, promoting global interactions between communities. Research conducted included qualitative and quantitative data analysis. We continue to work internally on making edits and changes to accommodate a dynamic society. The STEM Collaboratory NYC® at Pace University New York City has acted as the prime institution for the BOP CCERS project since the project's inception in 2014. The project continues to strive to provide opportunities in STEM for underrepresented and underserved populations in New York City. The replicable model serves as an opportunity for other entities to create this type of collaboration within their own communities and to bring a community together to address a notable issue. Providing opportunities for young students to engage in community initiatives allows for a more cohesive set of stakeholders, the ability for young people to network, and additional resources for those students in need of further support and structure. The project has planted more than 47 million oysters across 12 acres and 15 reef sites, with the help of more than 8,000 students and 10,000 volunteers. Additional enhancements and features of the BOP CCERS Digital Platform will continue over the next three years through funding provided by the National Science Foundation (NSF DRL EHR 1759006/1839656, Principal Investigator Dr. Lauren Birney, Professor, Pace University). Early results from the data indicate that the new version of the Platform is gaining traction both nationally and internationally among community stakeholders and constituents. The project continues to focus on new collaborative partners that will support underrepresented students in STEM education. The advanced Digital Platform will allow us to connect with other countries and networks on a larger global scale.

Keywords: STEM education, environmental restoration science, technology, citizen science

Procedia PDF Downloads 86
601 Single Stage Holistic Interventions: The Impact on Well-Being

Authors: L. Matthewman, J. Nowlan

Abstract:

Background: Holistic or integrative psychology emphasizes the interdependence of physiological, spiritual and psychological dynamics. Studying "wholeness and well-being" from a systems perspective combines innovative psychological science interventions with Eastern-orientated healing wisdoms and therapies. The literature surrounding holistic/integrative psychology focuses on multi-stage interventions in attempts to enhance the mind-body experiences of well-being for participants. This study proposes a new single stage model as an intervention for UG/PG students, time-constrained workplace employees and managers/leaders for improved well-being and life enhancement. The main research objective was to investigate participants' experiences of holistic and mindfulness interventions and their impact on emotional well-being. The main research question asked whether single stage holistic interventions could impact psychological well-being. This is of consequence because many people report that a reason for not taking part in mind-body or wellness programmes is that they believe they do not have sufficient time to engage in such pursuits. Experimental approach: The study employed a mixed methods pre-test/post-test research design. Data were analyzed using descriptive statistics and interpretative phenomenological analysis. Purposive sampling methods were employed. An adapted mindfulness measurement questionnaire (MAAS) was administered to 20 volunteer final year UG student participants prior to the single stage intervention and following the intervention. A further post-test longitudinal follow-up took place one week later. Intervention: The single stage model intervention consisted of a half-hour session of mindfulness, yoga stretches and head and neck massage in the following sequence: mindful awareness of the breath, yoga stretches 1, mindfulness of the body, head and neck massage, mindfulness of sounds, yoga stretches 2, finishing with pure awareness mindfulness. Results: The findings on the pre-test indicated key themes concerning "being largely unaware of feelings", "overwhelmed with final year exams", "juggling other priorities", "not feeling in control", "stress" and "negative emotional display episodes". Themes indicated on the post-test included 'more aware of self', 'in more control', 'immediately more alive' and 'just happier' compared to the pre-test. Themes from post-test 2 indicated similar findings to post-test 1, but on a lesser scale when scored for intensity. Interestingly, the majority of participants reported that they would now seek other similar interventions in the future and would be likely to engage with a multi-stage intervention on a longer-term basis. Overall, participants reported increased psychological well-being after the single stage intervention. Conclusion: A single stage one-off intervention model can be effective in supporting the well-being of final year UG students. There is little indication to suggest that this would not generalize to others in different areas of life and business. However, this study must be interpreted with caution due to the low number of participants. Implications: Single stage one-off interventions can be used to enhance the lives of people who might not otherwise sign up for a longer multi-stage intervention. In addition, single stage interventions can be utilized to help participants progress onto longer multi-stage interventions. Finally, further research into single stage well-being interventions is encouraged.

Keywords: holistic/integrative psychology, mindfulness, well-being, yoga

Procedia PDF Downloads 353
600 Comparison of On-Site Stormwater Detention Policies in Australian and Brazilian Cities

Authors: Pedro P. Drumond, James E. Ball, Priscilla M. Moura, Márcia M. L. P. Coelho

Abstract:

In recent decades, On-site Stormwater Detention (OSD) systems have been implemented in many cities around the world. In Brazil, urban drainage source control policies were created in the 1990s and were mainly based on OSD. The concept of this technique is to promote the detention of the additional stormwater runoff caused by impervious areas, in order to maintain pre-urbanization peak flow levels. In Australia, OSD was first adopted in the early 1980s by the Ku-ring-gai Council in Sydney's northern suburbs and by Wollongong City Council. Many papers on the topic were published at that time. However, source control techniques related to stormwater quality have since come to the forefront, and OSD has been relegated to the background. In order to evaluate the effectiveness of the current regulations regarding OSD, the existing policies were compared in Australian cities, in a country considered experienced in the use of this technique, and in Brazilian cities, where OSD adoption has been increasing. The cities selected for analysis were Wollongong and Belo Horizonte, the first municipalities to adopt OSD in their respective countries, and Sydney and Porto Alegre, cities whose policies are local references. The Australian and Brazilian cities are located in the Southern Hemisphere, and similar rainfall intensities can be observed, especially in storm bursts greater than 15 minutes. Regarding technical criteria, the Brazilian cities have a site-based approach, analyzing only on-site system drainage. This approach is criticized for not evaluating impacts on urban drainage systems and may, in rare cases, cause an increase in peak flows downstream. The city of Wollongong and most of the Sydney councils adopted a catchment-based approach, requiring the use of Permissible Site Discharge (PSD) and Site Storage Requirement (SSR) values based on analysis of entire catchments via hydrograph-producing computer models. Based on the premise that OSD should be designed to dampen storms of 100-year Average Recurrence Interval (ARI), the values of PSD and SSR in these four municipalities were compared. In general, the Brazilian cities presented low values of PSD and high values of SSR. This can be explained by the site-based approach and the low runoff coefficient value adopted for pre-development conditions. The results clearly show the differences between the approaches and methodologies adopted in OSD designs among Brazilian and Australian municipalities, especially with regard to PSD values, which lie on opposite sides of the scale. However, the lack of research regarding the real performance of constructed OSD does not allow a determination of which is best. It is necessary to investigate OSD performance in real situations, assessing the damping provided throughout its useful life, maintenance issues, debris blockage problems and the parameters related to rain-flow methods. Acknowledgments: The authors wish to thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (Chamada Universal – MCTI/CNPq Nº 14/2014), FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais, and CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior for their financial support.
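
To illustrate the site-based sizing logic behind PSD and SSR, the sketch below estimates an SSR as the volume of a post-development inflow hydrograph that exceeds a constant PSD. Real catchment-based designs route the hydrograph through the actual outlet (orifice/weir) instead, and all numbers here are made up.

```python
import numpy as np

# Simplified, site-based OSD sizing: store whatever inflow exceeds the
# permissible site discharge (PSD). This ignores drawdown during the
# event, so it is a conservative volume estimate.
dt = 60.0                                                        # time step (s)
inflow = np.array([0, 20, 60, 120, 90, 50, 25, 10, 0]) / 1000.0  # m3/s
psd = 0.040                                                      # PSD (m3/s)

excess = np.clip(inflow - psd, 0.0, None)  # flow that must be stored
ssr = excess.sum() * dt                    # site storage requirement (m3)
print(f"SSR approx. {ssr:.1f} m3")
```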

Keywords: on-site stormwater detention, source control, stormwater, urban drainage

Procedia PDF Downloads 180
599 Implementing the WHO Air Quality Guideline for PM2.5 Worldwide can Prevent Millions of Premature Deaths Per Year

Authors: Despina Giannadaki, Jos Lelieveld, Andrea Pozzer, John Evans

Abstract:

Outdoor air pollution by fine particles ranks among the top ten global health risk factors that can lead to premature mortality. Epidemiological cohort studies, mainly conducted in the United States and Europe, have shown that long-term exposure to PM2.5 (particles with an aerodynamic diameter of less than 2.5 μm) is associated with increased mortality from cardiovascular and respiratory diseases and lung cancer. Fine particulates can cause health impacts even at very low concentrations; previously, no concentration level had been defined below which health damage can be fully prevented. The World Health Organization ambient air quality guidelines suggest an annual mean PM2.5 concentration limit of 10 μg/m3. Populations in large parts of the world, especially in East and Southeast Asia and the Middle East, are exposed to levels of fine particulate pollution that far exceed the World Health Organization guidelines. The aim of this work is to evaluate the implementation of recent air quality standards for PM2.5 in the EU, the US, and other countries worldwide, and to estimate what measures would be needed to substantially reduce premature mortality. We investigated premature mortality attributable to fine particulate matter (PM2.5) among adults ≥ 30 years and children < 5 years, applying a high-resolution global atmospheric chemistry model combined with epidemiological concentration-response functions. The latter are based on the methodology of the Global Burden of Disease for 2010, assuming a ‘safe’ annual mean PM2.5 threshold of 7.3 μg/m3. We estimate global premature mortality by PM2.5 at 3.15 million/year in 2010. China is the leading country with about 1.33 million, followed by India with 575 thousand and Pakistan with 105 thousand. For the European Union (EU) we estimate 173 thousand and for the United States (US) 52 thousand in 2010. Based on sensitivity calculations, we tested the gains from PM2.5 control by applying the air quality guidelines (AQG) and standards of the World Health Organization (WHO), the EU, the US, and other countries. To estimate potential reductions in mortality rates, we take into consideration the deaths that cannot be avoided after the implementation of PM2.5 upper limits, due to the contribution of natural sources to total PM2.5 and therefore to mortality (mainly airborne desert dust). The annual mean EU limit of 25 μg/m3 would reduce global premature mortality by 18%, while within the EU the effect is negligible, indicating that the standard is largely met and that stricter limits are needed. The new US standard of 12 μg/m3 would reduce premature mortality by 46% worldwide, 4% in the US, and 20% in the EU. Implementing the WHO AQG of 10 μg/m3 would reduce global premature mortality by 54%, by 76% in China, and by 59% in India. In the EU and US, mortality would be reduced by 36% and 14%, respectively. Hence, following the WHO guideline would prevent 1.7 million premature deaths per year. Sensitivity calculations indicate that even small changes to the lower PM2.5 standards can have major impacts on global mortality rates.
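
For intuition on how attributable mortality scales with the chosen limit, here is a deliberately simplified sketch using a log-linear concentration-response function above the 7.3 μg/m3 threshold. The relative risk, baseline deaths, and PM2.5 levels are assumptions; the paper itself uses the more elaborate GBD 2010 exposure-response functions coupled to a global atmospheric chemistry model.

```python
import math

# Simplified illustration (not the paper's model): deaths attributable to
# PM2.5 via a log-linear concentration-response function above a 'safe'
# threshold, with the relative risk per 10 ug/m3 taken as an assumption.

def attributable_deaths(pm25, threshold, beta, baseline_deaths):
    """Deaths attributable to PM2.5 above `threshold` (ug/m3).

    beta: log(RR) per ug/m3 above the threshold (assumed).
    baseline_deaths: baseline cause-specific deaths in the population.
    """
    excess = max(pm25 - threshold, 0.0)
    rr = math.exp(beta * excess)
    af = (rr - 1.0) / rr  # population attributable fraction
    return af * baseline_deaths

beta = math.log(1.06) / 10.0  # RR = 1.06 per 10 ug/m3, assumed
threshold = 7.3               # 'safe' threshold used in the paper

# Hypothetical region: 80,000 baseline cardiopulmonary deaths per year.
for pm25 in (60.0, 25.0, 12.0, 10.0):  # current level vs. EU/US/WHO limits
    d = attributable_deaths(pm25, threshold, beta, 80_000)
    print(f"PM2.5 = {pm25:5.1f} ug/m3 -> ~{d:,.0f} attributable deaths/yr")
```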

Keywords: air quality guidelines, outdoor air pollution, particulate matter, premature mortality

Procedia PDF Downloads 310
598 Predicting OpenStreetMap Coverage by Means of Remote Sensing: The Case of Haiti

Authors: Ran Goldblatt, Nicholas Jones, Jennifer Mannix, Brad Bottoms

Abstract:

Accurate, complete, and up-to-date geospatial information is the foundation of successful disaster management. When the 2010 Haiti earthquake struck, accurate and timely information on the distribution of critical infrastructure was essential for the disaster response community's search and rescue operations. Existing geospatial datasets such as Google Maps did not have comprehensive coverage of these features. In the days following the earthquake, many organizations released high-resolution satellite imagery, catalyzing a worldwide effort to map Haiti and support the recovery operations. Among these, OpenStreetMap (OSM), a collaborative project to create a free editable map of the world, used the imagery to support volunteers in digitizing roads, buildings, and other features, creating the most detailed map of Haiti in existence in just a few weeks. However, large portions of the island are still not fully covered by OSM, and there is an increasing need for a tool to automatically identify which areas in Haiti, as well as in other disaster-prone countries, are not fully mapped. The objective of this project is to leverage different types of remote sensing measurements, together with machine learning approaches, to identify geographical areas where OSM coverage of building footprints is incomplete. Several remote sensing measures and derived products were assessed as potential predictors of OSM building footprint coverage, including: intensity of light emitted at night (based on VIIRS measurements); spectral indices derived from the Sentinel-2 satellite (normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), soil-adjusted vegetation index (SAVI), and urban index (UI)); surface texture (based on Sentinel-1 SAR measurements); and elevation and slope. Additional remote sensing derived products, such as Hansen Global Forest Change, DLR's Global Urban Footprint (GUF), and the World Settlement Footprint (WSF), were also evaluated as predictors, as was the OSM street and road network (including junctions). A supervised random forest model predicted 89% of the variation in OSM building footprint area in a given cell. These predictions allowed the identification of cells that are predicted to contain buildings but are not yet mapped. With these results, this methodology could be adapted to any location to assist with preparing for future disastrous events and to ensure that essential geospatial information is available to support response and recovery efforts during and following major disasters.
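
A minimal sketch of the modeling step might look as follows. The per-cell feature table, its column names, and the file haiti_grid_cells.csv are hypothetical stand-ins for the predictors listed above; a random forest regressor is used since the target (footprint area) is continuous.

```python
# Illustrative sketch (not the authors' code): predicting OSM building
# footprint area per grid cell from remote-sensing features with a random
# forest. Feature names and the input CSV are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

cells = pd.read_csv("haiti_grid_cells.csv")  # hypothetical per-cell table
features = ["viirs_ntl", "ndvi", "ndbi", "savi", "ui",
            "sar_texture", "elevation", "slope",
            "guf", "wsf", "osm_road_len", "osm_junctions"]

X_train, X_test, y_train, y_test = train_test_split(
    cells[features], cells["osm_building_area"], test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("R^2 on held-out cells:", r2_score(y_test, rf.predict(X_test)))

# Cells with high predicted footprint but little mapped area are
# candidates for targeted volunteer mapping.
cells["predicted_area"] = rf.predict(cells[features])
gaps = cells[cells["osm_building_area"] < 0.1 * cells["predicted_area"]]
print(f"{len(gaps)} cells flagged as likely under-mapped")
```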

Keywords: disaster management, Haiti, machine learning, OpenStreetMap, remote sensing

Procedia PDF Downloads 125
597 Subcontractor Development Practices and Processes: A Conceptual Model for LEED Projects

Authors: Andrea N. Ofori-Boadu

Abstract:

The purpose of this study is to develop a conceptual model of subcontractor development practices and processes that strengthen the integration of subcontractors into construction supply chain systems for improved subcontractor performance on Leadership in Energy and Environmental Design (LEED) certified building projects. The construction management of a LEED project has the important objective of meeting sustainability certification requirements, in addition to the typical project management objectives of cost, time, quality, and safety for traditional projects, which increases the complexity of LEED projects. Considering that construction management organizations rely heavily on subcontractors, poor performance on complex projects such as LEED projects has been largely attributed to the unsatisfactory preparation of subcontractors. Furthermore, the extensive use of unique and non-repetitive short-term contracts limits the full integration of subcontractors into construction supply chains and hinders the long-term cooperation and benefits that could enhance performance on construction projects. Improved subcontractor development practices are needed to better prepare and manage subcontractors so that complex objectives can be met or exceeded. While supplier development and supply chain theories and practices in the manufacturing sector have been extensively investigated to address similar challenges, comparable investigations in the construction sector are scarce. Consequently, the objective of this research is to investigate effective subcontractor development practices and processes to guide construction management organizations in developing a strong network of high-performing subcontractors. Drawing from foundational supply chain and supplier development theories in the manufacturing sector, a mixed interpretivist and empirical methodology is utilized to assess the body of knowledge within the literature for conceptual model development. A self-reported survey with five-point Likert-scale items and open-ended questions was administered to 30 construction professionals to estimate their perceptions of the effectiveness of 37 practices, classified into five subcontractor development categories. Data analysis includes descriptive statistics, weighted means, and t-tests that guide the effectiveness ranking of practices and categories. The results inform the proposed three-phase LEED subcontractor development program model, which focuses on preparation, development and implementation, and monitoring. Highly ranked LEED subcontractor pre-qualification, commitment, incentive, evaluation, and feedback practices are perceived as more effective than practices requiring more direct involvement and linkages between subcontractors and construction management organizations. This is attributed to unfamiliarity, conflicting interests, lack of trust, and resource-sharing challenges. With strategic modifications, the recommended practices can be extended to other non-LEED complex projects. Additional research is needed to guide the development of subcontractor development programs that strengthen direct involvement between construction management organizations and their networks of high-performing subcontractors. Insights from this research strengthen the theoretical foundations for future research towards more integrated construction supply chains, which in the long term would lead to increased performance, profits, and client satisfaction.
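
A small sketch of how such an effectiveness ranking can be computed from Likert responses, with each practice's mean rating tested against the neutral midpoint of the five-point scale; the practices shown and all ratings are hypothetical, not the survey data.

```python
# Illustrative sketch (hypothetical ratings, not the survey data): ranking
# practice effectiveness from five-point Likert items using means and
# one-sample t-tests against the scale midpoint.
import numpy as np
from scipy import stats

# responses[p] = Likert ratings (1-5) from surveyed professionals for practice p
responses = {
    "pre-qualification": [5, 4, 5, 4, 4, 5, 3, 4, 5, 4],
    "incentives":        [4, 4, 3, 5, 4, 4, 4, 3, 4, 5],
    "resource sharing":  [3, 2, 3, 3, 4, 2, 3, 3, 2, 3],
}

for practice, scores in responses.items():
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    t, p = stats.ttest_1samp(scores, 3.0)  # vs. neutral midpoint of 3
    print(f"{practice:18s} mean={mean:.2f} t={t:.2f} p={p:.3f}")
```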

Keywords: construction management, general contractor, supply chain, sustainable construction

Procedia PDF Downloads 110
596 The Digital Divide: Examining the Use and Access to E-Health Based Technologies by Millennials and Older Adults

Authors: Delana Theiventhiran, Wally J. Bartfay

Abstract:

Background and Significance: As the Internet becomes the epitome of modern communications, there are many pragmatic reasons why the digital divide matters for accessing and using E-health based technologies. With the rise of technology usage globally, older adults may not be as familiar and comfortable with technology as other generations, such as millennials, and are thus put at a disadvantage when examining and using E-health based platforms and technologies. Currently, little is known about how older adults and millennials access and use E-health based technologies. Methods: A systematic review of the literature was undertaken using three databases: (i) PubMed, (ii) ERIC, and (iii) CINAHL, with the search term 'digital divide and generations' to identify potential articles. To extract the required data from the studies, a data abstraction tool was created to obtain the following information: (a) author, (b) year of publication, (c) sample size, (d) country of origin, (e) design/methods, and (f) major findings/outcomes. Inclusion criteria were publication between January 2009 and August 2018, written in the English language, target populations of older adults aged 65 and above and millennials, and peer-reviewed quantitative studies only. Major Findings: PubMed provided 505 potential articles, of which 23 met the inclusion criteria. ERIC provided 53 potential articles, none of which met the criteria following data extraction. CINAHL provided 14 potential articles, of which eight met the criteria following data extraction. Conclusion: Practically speaking, identifying how newer E-health based technologies can be integrated into society, and why there is a gap in digital technology use, will help reduce the impact on generations and individuals who are less familiar with technology and Internet usage. The largest concern of all is how to prepare older adults for new and emerging E-health technologies; there is a dearth of literature in this area because it is a newer field of research, and the benefits and consequences of technology integrated into daily living are only beginning to be investigated. Several of the articles (N=11) indicated that age is one of the larger factors contributing to the digital divide. Similarly, many of the examined articles (N=5) identified privacy concerns as one of the main deterrents of technology usage for individuals aged 65 and above. The older adult generation regards privacy as a major concern, especially in regard to how data is collected, used, and possibly sold to third-party groups by various websites. Additionally, access to technology, the Internet, and infrastructure plays a large part in the way individuals are able to receive and use information. Lastly, a change in the way healthcare is currently used, received, and distributed would also help ensure that no generation is left behind in a technologically advanced society.

Keywords: digital divide, e-health, millennials, older adults

Procedia PDF Downloads 172
595 Optical and Structural Characterization of Rare Earth Doped Phosphate Glasses

Authors: Zélia Maria Da Costa Ludwig, Maria José Valenzuela Bell, Geraldo Henriques Da Silva, Thales Alves Faraco, Victor Rocha Da Silva, Daniel Rotmeister Teixeira, Vírgilio De Carvalho Dos Anjos, Valdemir Ludwig

Abstract:

Advances in telecommunications grow with the development of optical amplifiers based on rare earth ions. The focus has been concentrated on silicate glasses, although their amplified spontaneous emission is limited to a few tens of nanometers (~40 nm). Recently, phosphate glasses have received great attention due to their potential application in optical data transmission, detection, sensors and laser detectors, waveguides and optical fibers, as well as their excellent physical properties, such as high thermal expansion coefficients and low melting temperature. Compared with silica glasses, phosphate glasses provide different optical properties, such as a large infrared transmission window and good density. Research on improving the physical and chemical durability of phosphate glass by the addition of heavy metal oxides to P2O5 has been performed. The addition of Na2O further improves the solubility of rare earths, while increasing the Al2O3 links in the P2O5 tetrahedra results in increased aqueous durability and glass transition temperature and a decrease in the coefficient of thermal expansion. This work describes the structural and spectroscopic characterization of a phosphate glass matrix doped with different erbium (Er) concentrations. The phosphate glasses containing Er3+ ions were prepared by the melt technique. A study of the optical absorption, luminescence, and lifetime was conducted in order to characterize the infrared emission of Er3+ ions at 1540 nm, due to the radiative transition 4I13/2 → 4I15/2. Our results indicate that the present glass is quite a good matrix for Er3+ ions, and the quantum efficiency of the 1540 nm emission was high. A quenching mechanism for the mentioned luminescence was not observed up to an Er concentration of 2.0 mol%. The Judd-Ofelt parameters, radiative lifetime, and quantum efficiency have been determined in order to evaluate the potential of Er3+ ions in the new phosphate glass. The parameters follow the trend Ω2 > Ω4 > Ω6. It is well known that the parameter Ω2 is an indication of the dominant covalent nature and/or structural changes in the vicinity of the ion (short-range effects), while the Ω4 and Ω6 intensity parameters are long-range parameters that can be related to bulk properties such as viscosity and rigidity of the glass. From the PL measurements, no red or green upconversion was measured when pumping the samples with laser excitation at 980 nm. As a future prospect, this glass system will be synthesized with silver in order to determine the influence of silver nanoparticles on the Er3+ ions.
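
As a numerical illustration of the quantum efficiency evaluation mentioned above, η can be taken as the ratio of the measured lifetime of the 4I13/2 level to the Judd-Ofelt radiative lifetime. All lifetime values below are hypothetical, not the measured data.

```python
# Minimal sketch (hypothetical values, not the paper's data): quantum
# efficiency of the 1540 nm Er3+ emission, eta = tau_measured / tau_radiative,
# where tau_radiative = 1 / A_rad follows from Judd-Ofelt theory.

def quantum_efficiency(tau_measured_ms, tau_radiative_ms):
    """eta = tau_meas / tau_rad for the 4I13/2 -> 4I15/2 transition."""
    return tau_measured_ms / tau_radiative_ms

tau_rad = 8.0  # radiative lifetime of the 4I13/2 level (ms), assumed
for er_mol_pct, tau_meas in [(0.5, 7.4), (1.0, 7.1), (2.0, 6.6)]:
    eta = quantum_efficiency(tau_meas, tau_rad)
    print(f"Er = {er_mol_pct} mol%: eta ~ {eta:.0%}")
```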

Keywords: phosphate glass, erbium, luminescence, glass system

Procedia PDF Downloads 510
594 The Effect of the Performance Evaluation System on Administrative Productivity: A Case Study

Authors: Ertuğrul Ferhat Yilmaz, Ali Riza Perçin

Abstract:

In business enterprises that implement modern management principles, the most important issues are increasing the performance of workers and maximizing income. Through the twentieth century, the rapid development of the data processing and communication sectors, together with free-trade policies and the rise of multinational enterprises, erased economic borders and turned local competition into global competition. Under these competitive conditions, business enterprises have to operate actively and productively in order to survive. The employees of business enterprises are the most important factor of production. Therefore, business enterprises, recognizing the importance of the human factor for increasing profit, have used the performance evaluation system to increase the success and development of employees. Performance evaluation aims to increase workforce productivity by using employees in an active way. Furthermore, this system supports the wage policies implemented in the business enterprise, the determination of strategic plans over the short and long term, promotion, the determination of employees' educational needs, and decisions on dismissal and job rotation. It requires a great deal of effort to keep pace with change in the working realm and to keep ourselves up-to-date. Getting quality from people and having an effect in the workplace depend largely on the knowledge and competence of managers and prospective managers. Therefore, managers need to use performance evaluation systems in order to base their managerial decisions on sound data. This study aims at finding out whether organizations effectively use performance evaluation systems, how much importance is placed on this issue, and how much the results of the evaluations affect employees. Whether organizations gain a competitive advantage and can continue their activities depends to a large extent on how effectively and efficiently they use their employees. Therefore, it is of vital importance to evaluate employees' performance and to improve it according to the results of that evaluation. The performance evaluation system, which evaluates employees according to criteria related to the organization, has become one of the most important topics for management. By means of the important ends mentioned above, the performance evaluation system appears to be a tool that can be used to improve the efficiency and effectiveness of an organization. Because of its contribution to organizational success, considering performance evaluation on the axis of efficiency shows the importance of this study from a different angle. In this study, we explain the performance evaluation system, efficiency, and the relation between the two concepts. We also analyze the results of questionnaires conducted with textile workers in the city of Edirne. We received positive answers to the questions about the effects of performance evaluation on efficiency. After factor analysis, efficiency and motivation, which were determined as factors of the performance evaluation system, had the largest variance (19.703%) in our sample. Thus, this study shows that objective performance evaluation increases the efficiency and motivation of employees.

Keywords: performance, performance evaluation system, productivity, Edirne region

Procedia PDF Downloads 303
593 Design Flood Estimation in Satluj Basin-Challenges for Sunni Dam Hydro Electric Project, Himachal Pradesh-India

Authors: Navneet Kalia, Lalit Mohan Verma, Vinay Guleria

Abstract:

Introduction: Design flood studies are essential for the effective planning and functioning of water resource projects. Design flood estimation for the Sunni Dam Hydro Electric Project, located on the river Satluj in the state of Himachal Pradesh, India, was a big challenge, given that the river flows through the Himalayan region from Tibet to India and has a large catchment area of varying topography, climate, and vegetation. No discharge data was available for the part of the river in Tibet, whereas for India it was available only at Khab, Rampur, and Luhri. The estimation of the design flood using standard methods was therefore not possible. This challenge was met using two different approaches for the upper (snow-fed) and lower (rain-fed) catchments: a flood frequency approach and a hydro-meteorological approach. (i) For the catchment up to the Khab gauging site (sub-catchment C1), the flood frequency approach was used. Around 90% of the catchment area up to Khab (46,300 sq km) is snow-fed, lying above 4,200 m. Since the area is predominantly snow-fed, the 1-in-10,000-year return period flood estimated by flood frequency analysis at Khab was considered the Probable Maximum Flood (PMF). The flood peaks were taken from daily observed discharges at Khab, increased by 10% to make them instantaneous. The design flood of 4,184 cumec thus obtained was adopted as the PMF at Khab. (ii) For the catchment between Khab and the Sunni Dam (sub-catchment C2), the hydro-meteorological approach was used. This method is based on the catchment's response to the rainfall pattern (Probable Maximum Precipitation, PMP) observed in a particular catchment area. The design flood computation mainly involves the estimation of a design storm hyetograph and the derivation of the catchment response function. A unit hydrograph is assumed to represent the response of the entire catchment area to a unit rainfall. The main advantage of the hydro-meteorological approach is that it gives a complete flood hydrograph, which allows a realistic determination of its moderation effect while passing through a reservoir or a river reach. These studies were carried out to derive the PMF for the catchment area between Khab and the Sunni Dam site using 1-day and 2-day PMP values of 232 and 416 cm, respectively. The PMF so obtained was 12,920.6 cumec. Final Result: As the catchment area up to the Sunni Dam was divided into two sub-catchments, the flood hydrograph for catchment C1 was routed through the connecting channel reach (the river Satluj) using the Muskingum method, and the design flood was computed by adding the routed flood ordinates to the flood ordinates of catchment C2. A total design flood (i.e., 2-day PMF) with a peak of 15,473 cumec was obtained. Conclusion: Even though several factors are relevant when deciding the method to be used for design flood estimation, data availability and the purpose of the study are the most important. Since, generally, we cannot wait for hydrological data of adequate quality and quantity to become available, flood estimation has to be done using whatever data are available; the method to be used is selected depending on the type of data available for a particular catchment.
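
The Muskingum routing step lends itself to a short sketch. In the following, the routing parameters K and X, the time step, and the inflow hydrograph shape are assumptions (the peak is set to the 4,184 cumec C1 PMF purely for illustration):

```python
# Illustrative sketch (not the project's computation): Muskingum channel
# routing of an upstream flood hydrograph down a river reach.

def muskingum_route(inflow, K, X, dt):
    """Route `inflow` through a reach with storage S = K*(X*I + (1-X)*O).
    Returns the outflow series; K, dt in hours, flows in cumec."""
    denom = K - K * X + 0.5 * dt
    c0 = (-K * X + 0.5 * dt) / denom
    c1 = (K * X + 0.5 * dt) / denom
    c2 = (K - K * X - 0.5 * dt) / denom   # c0 + c1 + c2 = 1
    outflow = [inflow[0]]                 # start at the initial steady flow
    for t in range(1, len(inflow)):
        outflow.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1])
    return outflow

dt = 1.0         # routing interval (h), assumed
K, X = 3.0, 0.2  # reach travel time (h) and weighting factor, assumed
inflow = [100, 400, 1200, 2800, 4184, 3600, 2500, 1500, 800, 400, 200, 100]

routed = muskingum_route(inflow, K, X, dt)
print(f"Inflow peak {max(inflow)} cumec -> routed peak {max(routed):.0f} cumec")
```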

Keywords: design flood, design storm, flood frequency, PMF, PMP, unit hydrograph

Procedia PDF Downloads 327
592 Scenario-Based Learning Using Virtual Optometrist Applications

Authors: J. S. M. Yang, G. E. T. Chua

Abstract:

The Diploma in Optometry (OPT) course is a three-year program offered by Ngee Ann Polytechnic (NP) to train students to provide primary eye care. Students are equipped with foundational conceptual knowledge and practical skills in the first three semesters, before clinical modules in the fourth to sixth semesters. In the clinical modules, students typically have difficulty integrating the knowledge and skills acquired in past semesters to perform general eye examinations on public patients at the NP Optometry Centre (NPOC). To help students overcome this challenge, a web-based game, Virtual Optometrist (VO), was developed to help them apply their skills and knowledge through scenario-based learning. It consists of two interfaces, the Optical Practice Counter (OPC) and the Optometric Consultation Room (OCR), which provide two simulated settings for authentic learning experiences. In the OPC, students recommend and provide appropriate frame and lens selections based on a virtual patient's case history. In the OCR, students diagnose and manage virtual patients with common ocular conditions. The simulated scenarios provide real-world clinical situations that require the contextual application of integrated knowledge from relevant modules. The stages in the OPC and OCR are of increasing complexity, aligned with students' expected clinical competency as they progress to more senior semesters. This prevented gameplay fatigue as VO was used over the semesters to achieve different learning outcomes. Numerous feedback opportunities were provided to students based on their decisions, allowing individualized learning to take place. The game-based learning element in VO was achieved through a scoreboard and leaderboard to enhance students' motivation to perform. Scores were based on the speed and accuracy of students' responses to the questions posed in the simulated scenarios, preparing students to perform accurately and effectively under time pressure in a realistic optometric environment. Learning analytics generated in VO's back-end office from students' responses offered real-time data on distinctive and observable learner behaviors for monitoring students' engagement and learning progress. The back-end office also allowed the versatility to add, edit, and delete scenarios for different intended learning outcomes. A Likert scale was used to measure the learning experience with VO of OPT Year 2 and Year 3 students. The survey results highlighted the learning benefits of implementing VO in the different modules: enhancing the recall and reinforcement of clinical knowledge for contextual application and the development of higher-order thinking skills, increasing efficiency in clinical decision-making, facilitating learning through immediate feedback and second attempts, providing exposure to common and significant ocular conditions, and training effective communication skills. VO also exposed students, through simulated real-world clinical scenarios, to diverse ocular conditions that might otherwise not be encountered at the NPOC.

Keywords: authentic learning, game-based learning, scenario-based learning, simulated clinical scenarios

Procedia PDF Downloads 117
591 Fuzzy Availability Analysis of a Battery Production System

Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz

Abstract:

In today's competitive market, there are many alternative products that can be used in a similar manner and for similar purposes. Therefore, the utility of the product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the system's capabilities. Reliability is an important system design criterion for manufacturers seeking high availability. Availability is the probability that a system (or a component) is operating properly and performing its function at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate, enabling the company to realize its production plan. When considering only the corrective maintenance downtime of the system, mean time between failures (MTBF) and mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for reliability engineers and practitioners seeking to improve system performance by adopting suitable maintenance strategies. The failure and repair time probability distributions of each component in the system must be known for a conventional availability analysis. Generally, however, companies do not have statistics or quality control departments that store such large amounts of data, and real events or situations are described deterministically instead of with the stochastic data needed for a complete description of real systems. Fuzzy set theory is an alternative used to analyze the uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to computing system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers. To the best of our knowledge, the proposed method is the first application to use fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy to apply to any repairable production system by practitioners working in industry, and it allows reliability engineers, managers, and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey, focusing on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Instead of classical methods, using fuzzy set theory and obtaining intervals for these measures is very useful for system managers and practitioners analyzing system qualifications under their working conditions, as much more detailed information about system characteristics is obtained.
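
A minimal sketch of the core idea, with hypothetical MTBF and MTTR values rather than the factory's data: availability A = MTBF / (MTBF + MTTR) is evaluated over triangular fuzzy numbers built from the three percentage spreads used in the study.

```python
# Minimal sketch (hypothetical numbers, not the paper's data): availability
# A = MTBF / (MTBF + MTTR) with MTBF and MTTR as triangular fuzzy numbers
# (low, mode, high) built from a chosen percentage spread.

def tfn(center, spread):
    """Triangular fuzzy number (low, mode, high) from a +/- spread."""
    return (center * (1 - spread), center, center * (1 + spread))

def fuzzy_availability(mtbf, mttr):
    """Availability TFN: the pessimistic bound pairs low MTBF with
    high MTTR; the optimistic bound does the reverse."""
    lo = mtbf[0] / (mtbf[0] + mttr[2])
    mode = mtbf[1] / (mtbf[1] + mttr[1])
    hi = mtbf[2] / (mtbf[2] + mttr[0])
    return (lo, mode, hi)

mtbf_h, mttr_h = 120.0, 6.0        # hours, assumed for one stage
for spread in (0.15, 0.20, 0.25):  # the 15/20/25% spreads used in the study
    a = fuzzy_availability(tfn(mtbf_h, spread), tfn(mttr_h, spread))
    print(f"spread {spread:.0%}: A ~ ({a[0]:.3f}, {a[1]:.3f}, {a[2]:.3f})")
```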

Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)

Procedia PDF Downloads 224
590 A Numerical Hybrid Finite Element Model for Lattice Structures Using 3D/Beam Elements

Authors: Ahmadali Tahmasebimoradi, Chetra Mang, Xavier Lorang

Abstract:

Thanks to the additive manufacturing process, lattice structures are replacing traditional structures in the aeronautical and automotive industries. In order to evaluate the mechanical response of lattice structures, one has to resort to numerical techniques. Ansys is a globally well-known and trusted commercial software package that allows us to model lattice structures and analyze their mechanical response using either solid or beam elements; in this software, a script may be used to systematically generate lattice structures of any size. On the one hand, solid elements allow us to correctly model the contact between the substrates (the supports of the lattice structure) and the lattice structure, the local plasticity, and the junctions of the microbeams. However, their computational cost increases rapidly with the size of the lattice structure. On the other hand, although beam elements reduce the computational cost drastically, they do not correctly model the contact between the lattice structure and the substrates, nor the junctions of the microbeams; the notion of local plasticity is no longer valid, and the deformed shape of the lattice structure does not correspond to that obtained with 3D solid elements. In this work, motivated by the pros and cons of the 3D and beam models, a numerically hybrid model is presented for lattice structures to reduce the computational cost of the simulations while avoiding the aforementioned drawbacks of the beam elements. The approach consists of using solid elements for the junctions and beam elements for the microbeams connecting the corresponding junctions to each other. When the global response of the structure is linear, the results from the hybrid models are in good agreement with those from the 3D models for body-centered cubic with z-struts (BCCZ) and body-centered cubic without z-struts (BCC) lattice structures. However, the hybrid models have difficulty converging when the effects of large deformation and local plasticity are considerable in the BCCZ structures. Furthermore, the effect of the junction size on the hybrid models' results is investigated. For BCCZ lattice structures, the results are not affected by the junction size; this also holds for BCC lattice structures as long as the ratio of the junction size to the diameter of the microbeams is greater than 2. The hybrid model can also take geometric defects into account. As a demonstration, the point clouds of two lattice structures were parametrized in LATANA (LATtice ANAlysis), a platform developed by IRT-SystemX. In this process, an ellipse is fitted to each microbeam of the lattice structures to capture the effect of shape variation and roughness; each ellipse is represented by three parameters: semi-major axis, semi-minor axis, and angle of rotation. Given the parameters of the ellipses, the lattice structures are constructed in SpaceClaim (Ansys) using the geometrical hybrid approach. The results show a negligible discrepancy between the hybrid and 3D models, while the computational cost of the hybrid model is lower than that of the 3D model.
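
One plausible way to extract the three ellipse parameters from a cross-section point cloud is a second-moment (covariance) fit, sketched below on synthetic points. This is only an illustration of the parametrization described above, not the LATANA implementation.

```python
# Illustrative sketch (not the LATANA code): fitting an ellipse to a
# microbeam's cross-section point cloud via its second moments. The
# eigen-decomposition of the covariance yields the semi-axes and rotation
# angle, the three parameters mentioned above. Sample points are synthetic.
import numpy as np

def fit_cross_section_ellipse(points_2d):
    """points_2d: (N, 2) array of boundary points of one cross-section.
    Returns (semi_major, semi_minor, angle_rad)."""
    centered = points_2d - points_2d.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    # for points sampled on an ellipse boundary, Var(x) = a^2/2,
    # so the semi-axes are sqrt(2) times the standard deviations
    semi_minor, semi_major = np.sqrt(2.0 * eigvals)
    major_axis = eigvecs[:, 1]
    angle = np.arctan2(major_axis[1], major_axis[0]) % np.pi
    return semi_major, semi_minor, angle

rng = np.random.default_rng(0)
# synthetic noisy elliptical section: a = 0.6 mm, b = 0.4 mm, rotated 30 deg
t = rng.uniform(0, 2 * np.pi, 500)
xy = np.c_[0.6 * np.cos(t), 0.4 * np.sin(t)] + rng.normal(0, 0.02, (500, 2))
rot = np.deg2rad(30)
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
a, b, ang = fit_cross_section_ellipse(xy @ R.T)
print(f"a ~ {a:.2f} mm, b ~ {b:.2f} mm, angle ~ {np.rad2deg(ang):.0f} deg")
```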

Keywords: additive manufacturing, Ansys, geometric defects, hybrid finite element model, lattice structure

Procedia PDF Downloads 112
589 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the basis of the cloud services industry, and keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has been the subject of few in-depth studies. This work presents a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queuing theory parameters to describe these transitions. It relates the MAPE-K phase times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a given instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is then used as a solution to horizontally auto-scale time-sensitive services composed of microservices, reevaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of growth in the number of incoming requests to keep response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid for different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request ratio. A typical request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests that cannot finish in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes computing all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption, and business costs. The first is a burst-load scenario: all methodologies will discard requests if the burst is rapid enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances can handle the load at a lower business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
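
For flavor, here is a minimal reactive scaler in the spirit of the MAPE-K loop described above. It is a simplification, not the paper's model (which derives its targets from queuing theory parameters); the per-instance capacity, headroom factor, cooldown, and load series are all assumptions.

```python
# Minimal sketch (assumption-laden, not the paper's methodology): at each
# sampling tick, size the service from the observed arrival rate, the
# per-instance capacity, and a headroom factor, honoring a cooldown
# period between scaling actions.
import math

def plan_instances(arrival_rate, capacity_per_instance, headroom=1.2):
    """Analyze/Plan step: instances needed to keep saturation low."""
    return max(1, math.ceil(headroom * arrival_rate / capacity_per_instance))

def autoscale(arrivals, capacity, sample_dt=1.0, cooldown=3.0):
    """Execute step over a series of per-tick arrival rates (req/s)."""
    instances, last_change_t = 1, -cooldown
    history = []
    for tick, rate in enumerate(arrivals):
        t = tick * sample_dt
        target = plan_instances(rate, capacity)
        if target != instances and t - last_change_t >= cooldown:
            instances, last_change_t = target, t  # scale up or down
        history.append(instances)
    return history

# burst load: 20 req/s baseline jumping to 300 req/s, 25 req/s per instance
load = [20] * 5 + [300] * 10 + [40] * 5
print(autoscale(load, capacity=25.0))
```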

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 93
588 SME Internationalisation and Its Financing: An Exploratory Study That Analyses Government Support and Funding Mechanisms for Irish and Scottish International SMEs

Authors: L. Spencer, S. O’Donohoe

Abstract:

Much of the research to date on internationalisation relates to large firms, with much less known about how small and medium-sized enterprises (SMEs) engage in internationalisation. Given the crucial role of SMEs in contributing to economic growth, there is now an emphasis on the need for SMEs to internationalise. Yet little is known about how SMEs undertake and finance such expansion, and whether or not internationalisation actually hinders or helps them in securing finance. The purpose of this research is to explore the internationalisation process for SMEs, the sources of funding used in financing this expansion, and the support received from state agencies in assisting their overseas expansion. A conceptual framework has been devised which marries the two strands of literature together (internationalisation and financing the firm). The exploratory nature of this research dictated that the most appropriate methodology was semi-structured interviews with SME owners, bank representatives, and support agencies. In essence, a triangulated approach to the research problem facilitates assessment of the perceptions and experiences of the firms, the state, and the financial institutions. Our sample is drawn from SMEs operating in Ireland and Scotland, two small but very open economies where SMEs are the dominant form of organisation, and includes a range of industry sectors. Key findings to date suggest that some SMEs are 'born global', others are 'born-again global', whilst a significant cohort can be classed as traditional internationalisers. Unsurprisingly, there is a strong industry effect, with firms in the high-tech sector more likely to be faster internationalisers, in contrast to those in the traditional manufacturing sectors. Owner-managers' own funds are deemed key to financing initial internationalisation, lending support to the financial growth life cycle model, albeit more so for the faster internationalisers; the slower cohort are more likely to deploy external sources, especially bank finance. Retained earnings remain the predominant source of ongoing financing for internationalising firms, but trade credit is often used and invoice discounting is utilised quite frequently. In terms of lending, asset-based lending backed by personal guarantees appears paramount for securing bank finance. Whilst a lack of diversified sources of funding for internationalising SMEs was found in both jurisdictions, there appears to be no evidence to suggest that internationalisation impedes firms in securing finance. Finally, state supports were cited as important to the internationalisation process; in particular, those provided by Enterprise Ireland were deemed very valuable. Considering the paucity of studies to date on SME internationalisation, and in particular on the funding mechanisms deployed, this study seeks to contribute to the body of knowledge in both the international business and finance disciplines.

Keywords: funding, government support, international pathways, modes of entry

Procedia PDF Downloads 245
587 Pediatric Drug-Resistant Tuberculosis Pattern, Side Effect Profile and Treatment Outcome: North India Experience

Authors: Sarika Gupta, Harshika Khanna, Ajay K Verma, Surya Kant

Abstract:

Background: Drug-resistant tuberculosis (DR-TB) is a growing challenge to global TB control efforts, and pediatric DR-TB is one of the neglected infectious diseases. In our previously published report, we noted an increased prevalence of DR-TB in the pediatric population at a tertiary health care centre in North India, estimated at 17.4%, 15.1%, 18.4%, and 20.3% in 2018, 2019, 2020, and 2021, respectively. Limited evidence exists about the pattern of drug resistance, the side effect profile, and the programmatic outcomes of pediatric DR-TB treatment. Therefore, this study was done to determine the pattern of resistance, the side effect profile, and the treatment outcomes. Methodology: This was a prospective cohort study conducted at the nodal drug-resistant tuberculosis centre of a tertiary care hospital in North India from January 2021 to December 2022. Subjects were children aged 0-18 years with a diagnosis of DR-TB, on the basis of GeneXpert (rifampicin [RIF] resistance detected), line probe assay, and drug sensitivity testing (DST) of M. tuberculosis (MTB) grown on culture of body fluids. Children were classified as having monoresistant TB, polyresistant TB (resistance to more than one first-line anti-TB drug, other than both INH and RIF), MDR-TB, pre-XDR-TB, or XDR-TB, as per the WHO classification. All patients were prescribed DR-TB treatment as per standard guidelines, either a shorter oral DR-TB regimen or a longer all-oral MDR/XDR-TB regimen (modified for children below five years of age). All patients were followed up for side effects of treatment once per month. Outcomes were categorized as good if the patient had completed treatment and was cured or was improving during the course of treatment, and as bad in the case of death or failure to improve during treatment. Results: Of the 50 pediatric patients included in the study, 34 were female (66.7%) and 16 were male (31.4%). Thirty-three patients (64.7%) had pulmonary TB, while 17 (33.3%) had extrapulmonary TB. The proportions of monoresistant TB, polyresistant TB, MDR-TB, pre-XDR-TB, and XDR-TB were 2.0%, 0%, 50.0%, 30.0%, and 18.0%, respectively. A good outcome was reported in 40 patients (80.0%). The 10 bad outcomes comprised 7 deaths (14%) and 3 children (6.0%) who were not improving. Adverse events (single or multiple) were reported in all patients, most of which were mild in nature. The most common adverse events were metallic taste (16; 31.4%), rash and allergic reaction (15; 29.4%), nausea and vomiting (13; 26.0%), arthralgia (11; 21.6%), and alopecia (11; 21.6%). The serious adverse event of QTc prolongation was reported in 4 cases (7.8%), but neither arrhythmias nor symptomatic cardiac side effects occurred. Vestibular toxicity was reported in 2 cases (3.9%) and psychotic symptoms in 4 (7.8%). Hepatotoxicity, hypothyroidism, peripheral neuropathy, gynaecomastia, and amenorrhea were reported in 2 (4.0%), 4 (7.8%), 2 (3.9%), 1 (2.0%), and 2 (3.9%) cases, respectively. None of the drugs needed to be withdrawn due to uncontrolled adverse events. Conclusion: Pediatric DR-TB treatment achieved favorable outcomes in a large proportion of children, and the DR-TB regimen drugs were overall well tolerated in this cohort.

Keywords: pediatric, drug-resistant, tuberculosis, adverse events, treatment

Procedia PDF Downloads 66