256 A Delphi Study to Build Consensus for Tuberculosis Control Guideline to Achieve WHO End TB 2035 Strategy
Authors: Pui Hong Chung, Cyrus Leung, Jun Li, Kin On Kwok, Ek Yeoh
Abstract:
Introduction: Studies of TB control in intermediate tuberculosis burden countries (IBCs) make up a relatively small proportion of the TB control literature compared with the effort devoted to their high- and low-burden counterparts. There is currently no consensus on the optimal tools and strategies for combating TB in IBCs; TB control guidelines are inadequate, posing a great obstacle to eliminating TB in these countries. To fill this research and services gap, we need to summarize the findings of work in this area and to seek consensus on policy making for TB control; we have therefore devised a series of scoping and Delphi studies for these purposes. Method: The scoping and Delphi studies are conducted in parallel so that each feeds information to the other. Before the Delphi iterations, we invited three local experts in TB control in Hong Kong to participate in the pre-assessment round of the Delphi study and to comment on the validity, relevance, and clarity of the Delphi questionnaire. Result: Two scoping studies have been conducted, regarding LTBI control in health care workers in IBCs and TB control in the elderly in IBCs, respectively. The results of these two studies are used as the foundation for developing the Delphi questionnaire, which covers seven areas of questions, namely: characteristics of IBCs, adequacy of research and services in LTBI control in IBCs, importance and feasibility of interventions for TB control and prevention in hospitals, screening and treatment of LTBI in the community, reasons for refusal of or default from LTBI treatment, medication adherence in LTBI treatment, and importance and feasibility of interventions for TB control and prevention in the elderly in IBCs. The local experts also commented on the two scoping studies conducted, thus acting as the sixth phase of expert consultation in the Arksey and O'Malley framework of scoping studies, either to refine the scope and strategies used in these studies or to supplement ideas for further scoping or systematic review studies. In the subsequent stage, an international expert panel, comprising 15 to 20 experts from IBCs in the Western Pacific Region, will be recruited to join the two-round anonymous Delphi iterations. Four categories of TB control experts, namely clinicians, policy makers, microbiologists/laboratory personnel, and public health clinicians, will be our target groups. A consensus level of 80% is used to determine whether consensus has been achieved on particular issues. Key messages: 1. Scoping reviews and the Delphi method are useful for identifying gaps and then achieving consensus in research. 2. Many resources are currently devoted to high-burden countries; however, the often neglected intermediate-burden countries are an indispensable part of achieving the ambitious WHO End TB 2035 target.
Keywords: delphi questionnaire, tuberculosis, WHO, latent TB infection
Procedia PDF Downloads 307
255 Climate Change, Women's Labour Markets and Domestic Work in Mexico
Authors: Luis Enrique Escalante Ochoa
Abstract:
This paper attempts to assess the impacts of Climate change (CC) on inequalities in the labour market. CC will have the most serious effects on some vulnerable economic sectors, such as agriculture, livestock or tourism, but also on the most vulnerable population groups. The objective of this research is to evaluate the impact of CC on the labour market and particularly on Mexican women. Influential documents such as the synthesis reports produced by the Intergovernmental Panel on Climate Change (IPCC) in 2007 and 2014 revived a global effort to counteract the effects of CC, called for an analysis of the impacts on vulnerable socio-economic groups and on economic activities, and for the development of decision-making tools to enable policy and other decisions based on the complexity of the world in relation to climate change, taking into account socio-economic attributes. We follow up this suggestion and determine the impact of CC on vulnerable populations in the Mexican labour market, taking into account two attributes (gender and level of qualification of workers). Most studies have focused on the effects of CC on the agricultural sector, as it is considered a highly vulnerable economic sector to the effects of climate variability. This research seeks to contribute to the existing literature taking into account, in addition to the agricultural sector, other sectors such as tourism, water availability, and energy that are of vital importance to the Mexican economy. Likewise, the effects of climate change will be extended to the labour market and specifically to women who in some cases have been left out. The studies are sceptical about the impact of CC on the female labour market because of the perverse effects on women's domestic work, which are too often omitted from analyses. This work will contribute to the literature by integrating domestic work, which in the case of Mexico is much higher among women than among men (80.9% vs. 19.1%), according to the 2009 time use survey. This study is relevant since it will allow us to analyse impacts of climate change not only in the labour market of the formal economy, but also in the non-market sphere. Likewise, we consider that including the gender dimension is valid for the Mexican economy as it is a country with high degrees of gender inequality in the labour market. In the OECD economic study for Mexico (2017), the low labour participation of Mexican women is highlighted. Although participation has increased substantially in recent years (from 36% in 1990 to 47% in 2017), it remains low compared to the OECD average where women participate around 70% of the labour market. According to Mexico's 2009 time use survey, domestic work represents about 13% of the total time available. Understanding the interdependence between the market and non-market spheres, and the gender division of labour within them is the necessary premise for any economic analysis aimed at promoting gender equality and inclusive growth.Keywords: climate change, labour market, domestic work, rural sector
Procedia PDF Downloads 135
254 Trade in Value Added: The Case of the Central and Eastern European Countries
Authors: Łukasz Ambroziak
Abstract:
Although the impact of the production fragmentation on trade flows has been examined many times since the 1990s, the research was not comprehensive because of the limitations in traditional trade statistics. Early 2010s the complex databases containing world input-output tables (or indicators calculated on their basis) has made available. It increased the possibilities of examining the production sharing in the world. The trade statistic in value-added terms enables us better to estimate trade changes resulted from the internationalisation and globalisation as well as benefits of the countries from international trade. In the literature, there are many research studies on this topic. Unfortunately, trade in value added of the Central and Eastern European Countries (CEECs) has been so far insufficiently studied. Thus, the aim of the paper is to present changes in value added trade of the CEECs (Bulgaria, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia and Slovenia) in the period of 1995-2011. The concept 'trade in value added' or 'value added trade' is defined as the value added of a country which is directly and indirectly embodied in final consumption of another country. The typical question would be: 'How much value added is created in a country due to final consumption in the other countries?' The data will be downloaded from the World Input-Output Database (WIOD). The structure of this paper is as follows. First, theoretical and methodological aspects related to the application of the input-output tables in the trade analysis will be studied. Second, a brief survey of the empirical literature on this topic will be presented. Third, changes in exports and imports in value added of the CEECs will be analysed. A special attention will be paid to the differences in bilateral trade balances using traditional trade statistics (in gross terms) on one side, and value added statistics on the other. Next, in order to identify factors influencing value added exports and value added imports of the CEECs the generalised gravity model, based on panel data, will be used. The dependent variables will be value added exports and imports. The independent variables will be, among others, the level of GDP of trading partners, the level of GDP per capita of trading partners, the differences in GDP per capita, the level of the FDI inward stock, the geographical distance, the existence (or non-existence) of common border, the membership (or not) in preferential trade agreements or in the EU. For comparison, an estimation will also be made based on exports and imports in gross terms. The initial research results show that the gravity model better explained determinants of trade in value added than gross trade (R2 in the former is higher). The independent variables had the same direction of impact both on value added exports/imports and gross exports/imports. Only value of coefficients differs. The most difference concerned geographical distance. It had smaller impact on trade in value added than gross trade.Keywords: central and eastern European countries, gravity model, input-output tables, trade in value added
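As an illustration of the estimation step described above, the following is a minimal sketch of a log-linear gravity regression on panel data in Python. The file name and all variable names (va_exports, gdp_o, gdp_d, dist, contig, both_eu, pair_id) are assumptions for illustration; this is not the authors' WIOD dataset or code.

```python
# Minimal sketch of a panel gravity regression for value-added exports.
# All column names and the CSV file are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("ceec_va_trade_panel.csv")  # exporter-partner-year observations

formula = (
    "np.log(va_exports) ~ np.log(gdp_o) + np.log(gdp_d) + np.log(dist)"
    " + contig + both_eu + C(year)"  # year dummies absorb common shocks
)
result = smf.ols(formula, data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["pair_id"]}  # cluster by country pair
)
print(result.summary())

# Re-running the same specification with np.log(gross_exports) as the dependent
# variable allows the R-squared comparison reported in the abstract.
```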
Procedia PDF Downloads 243
253 A Damage-Plasticity Concrete Model for Damage Modeling of Reinforced Concrete Structures
Authors: Thanh N. Do
Abstract:
This paper addresses the modeling of two critical behaviors of concrete material in reinforced concrete components: (1) the increase in strength and ductility due to confining stresses from surrounding transverse steel reinforcements, and (2) the progressive deterioration in strength and stiffness due to high strain and/or cyclic loading. To improve the state-of-the-art, the author presents a new 3D constitutive model of concrete material based on plasticity and continuum damage mechanics theory to simulate both the confinement effect and the strength deterioration in reinforced concrete components. The model defines a yield function of the stress invariants and a compressive damage threshold based on the level of confining stresses to automatically capture the increase in strength and ductility when subjected to high compressive stresses. The model introduces two damage variables to describe the strength and stiffness deterioration under tensile and compressive stress states. The damage formulation characterizes well the degrading behavior of concrete material, including the nonsymmetric strength softening in tension and compression, as well as the progressive strength and stiffness degradation under primary and follower load cycles. The proposed damage model is implemented in a general purpose finite element analysis program allowing an extensive set of numerical simulations to assess its ability to capture the confinement effect and the degradation of the load-carrying capacity and stiffness of structural elements. It is validated against a collection of experimental data of the hysteretic behavior of reinforced concrete columns and shear walls under different load histories. These correlation studies demonstrate the ability of the model to describe vastly different hysteretic behaviors with a relatively consistent set of parameters. The model shows excellent consistency in response determination with very good accuracy. Its numerical robustness and computational efficiency are also very good and will be further assessed with large-scale simulations of structural systems.Keywords: concrete, damage-plasticity, shear wall, confinement
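For readers unfamiliar with damage-plasticity formulations, a generic stress-strain relation of this family (a textbook-style sketch, not the author's specific model) couples an effective-stress plasticity law with scalar tensile and compressive damage variables:

```latex
% Generic damage-plasticity relation (illustrative only; the paper's exact yield
% function and damage evolution laws are not reproduced here).
\bar{\boldsymbol{\sigma}} = \mathbf{C}_0 : \left(\boldsymbol{\varepsilon} - \boldsymbol{\varepsilon}^{p}\right),
\qquad
\boldsymbol{\sigma} = (1 - d_t)\,\bar{\boldsymbol{\sigma}}^{+} + (1 - d_c)\,\bar{\boldsymbol{\sigma}}^{-},
\qquad 0 \le d_t,\, d_c \le 1
```

Here \(\mathbf{C}_0\) is the undamaged elastic stiffness, \(\boldsymbol{\varepsilon}^{p}\) the plastic strain, \(\bar{\boldsymbol{\sigma}}^{\pm}\) the tensile/compressive parts of the effective stress, and \(d_t\), \(d_c\) the two damage variables, whose thresholds can be made functions of the confining stress as described above.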
Procedia PDF Downloads 173
252 Development and Validation of Cylindrical Linear Oscillating Generator
Authors: Sungin Jeong
Abstract:
This paper presents a linear oscillating generator of cylindrical type for hybrid electric vehicle application. The focus of the study is the suggestion of the optimal model and the design rule of the cylindrical linear oscillating generator with permanent magnet in the back-iron translator. The cylindrical topology is achieved using equivalent magnetic circuit considering leakage elements as initial modeling. This topology with permanent magnet in the back-iron translator is described by number of phases and displacement of stroke. For more accurate analysis of an oscillating machine, it will be compared by moving just one-pole pitch forward and backward the thrust of single-phase system and three-phase system. Through the analysis and comparison, a single-phase system of cylindrical topology as the optimal topology is selected. Finally, the detailed design of the optimal topology takes the magnetic saturation effects into account by finite element analysis. Besides, the losses are examined to obtain more accurate results; copper loss in the conductors of machine windings, eddy-current loss of permanent magnet, and iron-loss of specific material of electrical steel. The considerations of thermal performances and mechanical robustness are essential, because they have an effect on the entire efficiency and the insulations of the machine due to the losses of the high temperature generated in each region of the generator. Besides electric machine with linear oscillating movement requires a support system that can resist dynamic forces and mechanical masses. As a result, the fatigue analysis of shaft is achieved by the kinetic equations. Also, the thermal characteristics are analyzed by the operating frequency in each region. The results of this study will give a very important design rule in the design of linear oscillating machines. It enables us to more accurate machine design and more accurate prediction of machine performances.Keywords: equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, linear oscillating generator
Procedia PDF Downloads 196
251 Public Participation for an Effective Flood Risk Management: Building Social Capacities in Ribera Alta Del Ebro, Spain
Authors: Alba Ballester Ciuró, Marc Pares Franzi
Abstract:
While coming decades are likely to see a higher flood risk in Europe and greater socio-economic damages, traditional flood risk management has become inefficient. In response to that, new approaches such as capacity building and public participation have recently been incorporated in natural hazards mitigation policy (i.e. Sendai Framework for Action, Intergovernmental Panel on Climate Change reports and EU Floods Directive). By integrating capacity building and public participation, we present a research concerning the promotion of participatory social capacity building actions for flood risk mitigation at the local level. Social capacities have been defined as the resources and abilities available at individual and collective level that can be used to anticipate, respond to, cope with, recover from and adapt to external stressors. Social capacity building is understood as a process of identifying communities’ social capacities and of applying collaborative strategies to improve them. This paper presents a proposal of systematization of participatory social capacity building process for flood risk mitigation, and its implementation in a high risk of flooding area in the Ebro river basin: Ribera Alta del Ebro. To develop this process, we designed and tested a tool that allows measuring and building five types of social capacities: knowledge, motivation, networks, participation and finance. The tool implementation has allowed us to assess social capacities in the area. Upon the results of the assessment we have developed a co-decision process with stakeholders and flood risk management authorities on which participatory activities could be employed to improve social capacities for flood risk mitigation. Based on the results of this process, and focused on the weaker social capacities, we developed a set of participatory actions in the area oriented to general public and stakeholders: informative sessions on flood risk management plan and flood insurances, interpretative river descents on flood risk management (with journalists, teachers, and general public), interpretative visit to the floodplain, workshop on agricultural insurance, deliberative workshop on project funding, deliberative workshops in schools on flood risk management (playing with a flood risk model). The combination of obtaining data through a mixed-methods approach of qualitative inquiry and quantitative surveys, as well as action research through co-decision processes and pilot participatory activities, show us the significant impact of public participation on social capacity building for flood risk mitigation and contributes to the understanding of which main factors intervene in this process.Keywords: flood risk management, public participation, risk reduction, social capacities, vulnerability assessment
Procedia PDF Downloads 218
250 Optimizing Foaming Agents by Air Compression to Unload a Liquid Loaded Gas Well
Authors: Mhenga Agneta, Li Zhaomin, Zhang Chao
Abstract:
When velocity is high enough, gas can entrain fluid and carry to the surface, but as time passes by, velocity drops to a critical point where fluids will start to hold up in the tubing and cause liquid loading which prevents gas production and may lead to the death of the well. Foam injection is widely used as one of the methods to unload liquid. Since wells have different characteristics, it is not guaranteed that foam can be applied in all of them and bring successful results. This research presents a technology to optimize the efficiency of foam to unload liquid by air compression. Two methods are used to explain optimization; (i) mathematical formulas are used to solve and explain the myth of how density and critical velocity could be minimized when air is compressed into foaming agents, then the relationship between flow rates and pressure increase which would boost up the bottom hole pressure and increase the velocity to lift liquid to the surface. (ii) Experiments to test foam carryover capacity and stability as a function of time and surfactant concentration whereby three surfactants anionic sodium dodecyl sulfate (SDS), nonionic Triton 100 and cationic hexadecyltrimethylammonium bromide (HDTAB) were probed. The best foaming agents were injected to lift liquid loaded in a created vertical well model of 2.5 cm diameter and 390 cm high steel tubing covered by a transparent glass casing of 5 cm diameter and 450 cm high. The results show that, after injecting foaming agents, liquid unloading was successful by 75%; however, the efficiency of foaming agents to unload liquid increased by 10% with an addition of compressed air at a ratio of 1:1. Measured values and calculated values were compared and brought about ± 3% difference which is a good number. The successful application of the technology indicates that engineers and stakeholders could bring water flooded gas wells back to production with optimized results by firstly paying attention to the type of surfactants (foaming agents) used, concentration of surfactants, flow rates of the injected surfactants then compressing air to the foaming agents at a proper ratio.Keywords: air compression, foaming agents, gas well, liquid loading
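The density/critical-velocity argument sketched in method (i) is usually expressed with a Turner-type liquid-droplet model; a representative form is given below as an assumption for illustration, since the abstract does not state the exact correlation used.

```latex
% Turner/Coleman-type critical gas velocity for continuous liquid unloading
% (illustrative; the coefficient and units depend on the correlation adopted).
v_{c} \;=\; C\left[\frac{\sigma\,(\rho_{L}-\rho_{g})}{\rho_{g}^{2}}\right]^{1/4},
\qquad C \approx 1.6 \ \text{(ft/s, with } \sigma \text{ in dyn/cm and } \rho \text{ in lbm/ft}^{3}\text{)}
```

Reducing the effective liquid density and surface tension by foaming lowers \(v_c\), while compressing air into the foam raises the in-situ gas velocity toward that threshold, which is the combined effect the study exploits.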
Procedia PDF Downloads 138
249 A Study on Characteristics of Runoff Analysis Methods at the Time of Rainfall in Rural Area, Okinawa Prefecture Part 2: A Case of Kohatu River in South Central Part of Okinawa Pref
Authors: Kazuki Kohama, Hiroko Ono
Abstract:
Rainfall in Japan is gradually increasing every year according to the Japan Meteorological Agency and the Intergovernmental Panel on Climate Change Fifth Assessment Report. This means that the difference in rainfall between the rainy season and the dry season is increasing. In addition, a clear increasing trend in short-duration heavy rain appears. In recent years, natural disasters have caused enormous human injuries in various parts of Japan. Regarding water disasters, local heavy rain and floods of large rivers occur frequently, and a policy was adopted to promote both structural ('hard') and non-structural ('soft') emergency disaster prevention measures under a vision of rebuilding social awareness of water disaster prevention. Okinawa Prefecture, in a subtropical region, experiences torrential rain and water disasters such as river floods several times a year, and these floods occur in specific rivers among all 97 rivers. Also, limited channel capacity and narrow width are characteristic of rivers in Okinawa and easily cause river floods in heavy rain. This study focuses on the Kohatu River, one of these specific rivers. In fact, the water level rises above the river levee almost once a year, but usually without damage to the surrounding buildings. In some cases, however, the water level has reached the ground-floor height of houses; this has happened nine times to date. The purpose of this research is to clarify the relationship between precipitation, surface outflow, and the total treated water quantity of the Kohatu River. For this purpose, we perform a hydrological analysis which, although complicated and requiring specific details and data, mainly uses Geographic Information System (GIS) software and an outflow analysis system. First, we extract the watershed and then divide it into 23 catchment areas to understand how much surface outflow reaches the runoff point in each 10-minute interval. Second, we create a unit hydrograph indicating the surface outflow as a function of flow area and time. This index shows the maximum surface outflow at 2400 to 3000 seconds. Lastly, we compare the value estimated from the unit hydrograph with a measured value. We found that the measured value is usually lower than the estimated value, which we attribute to evaporation and transpiration. In this study, hydrograph analysis was performed using GIS software and an outflow analysis system. Based on these, we could clarify the flood timing and the amount of surface outflow.
Keywords: disaster prevention, water disaster, river flood, GIS software
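A minimal sketch of the unit-hydrograph step described above, i.e. convolving 10-minute effective rainfall with unit-hydrograph ordinates to estimate the runoff hydrograph at the outlet. The rainfall series and ordinates are made-up illustrative numbers, not the Kohatu River data.

```python
# Hypothetical sketch: discrete unit-hydrograph convolution on a 10-minute time step.
# The rainfall series and unit-hydrograph ordinates below are illustrative only.
import numpy as np

dt_s = 600                                                        # 10-minute step [s]
effective_rain_mm = np.array([0.0, 2.0, 6.0, 4.0, 1.0, 0.0])      # per 10 min
# Unit hydrograph: outlet discharge [m^3/s] per 1 mm of effective rain,
# peaking around 2400-3000 s as noted in the study.
uh = np.array([0.0, 0.3, 0.9, 1.4, 1.1, 0.6, 0.3, 0.1, 0.0])

direct_runoff = np.convolve(effective_rain_mm, uh)                # [m^3/s] per step
for k, q in enumerate(direct_runoff):
    print(f"t = {k * dt_s:5d} s   Q = {q:6.2f} m^3/s")

# Comparing this estimated hydrograph with the gauged one would show the
# systematic shortfall attributed to evaporation and transpiration losses.
```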
Procedia PDF Downloads 143
248 Designing of Induction Motor Efficiency Monitoring System
Authors: Ali Mamizadeh, Ires Iskender, Saeid Aghaei
Abstract:
Energy is one of the high-priority issues in the world. Energy demand is rapidly increasing with the growing population and industry, and the usable energy sources in the world will be insufficient to meet this demand. Therefore, the efficient and economical use of energy sources is becoming more important. In a survey of electricity-consuming machines, electrical machines account for about 40% of the total electrical energy consumed by electrical devices, and 96% of this consumption belongs to induction motors. Induction motors are the workhorses of industry and have very large application areas in industry and urban systems, such as water pumping and distribution systems and the steel and paper industries. Monitoring and control of the motors have an important effect on motor operating performance, drive selection, and the replacement strategy management of electrical machines. A sensorless monitoring system for monitoring and calculating the efficiency of induction motors is studied in this work. The IEEE equivalent circuit is used in the design. The terminal current and voltage of the induction motor are used in this method to measure the motor's efficiency. The motor nameplate information and the measured current and voltage are used in this system to calculate the losses of the induction motor accurately and thereby its input and output power. The efficiency of the induction motor is monitored online in the proposed method without disconnecting the motor from the drive and without adding any additional connection at the motor terminal box. The proposed monitoring system measures the efficiency accurately by including all losses, without using a torque meter or speed sensor. The monitoring system uses an embedded architecture and does not need to be connected to a computer to measure and log data. Conclusions regarding the efficiency, the accuracy, and the technical and economic benefits of the proposed method are presented. Experimental verification has been obtained on a three-phase, 1.1 kW, 2-pole induction motor. The proposed method can be used for optimal control of induction motors, efficiency monitoring, and motor replacement strategy.
Keywords: induction motor, efficiency, power losses, monitoring, embedded design
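As a rough illustration of the loss-segregation idea described above, the sketch below computes efficiency from measured terminal quantities and estimated loss components. The numerical values and the simple loss model are assumptions for illustration, not the embedded system's actual algorithm.

```python
# Hypothetical sketch: efficiency estimate of a three-phase induction motor from
# terminal measurements and segregated losses (all values are illustrative only).
import math

V_line = 400.0        # measured line-to-line voltage [V]
I_line = 2.4          # measured line current [A]
power_factor = 0.82   # derived from measured voltage/current waveforms

P_in = math.sqrt(3) * V_line * I_line * power_factor   # electrical input power [W]

# Loss components estimated from nameplate data and the equivalent circuit.
P_stator_cu = 3 * I_line**2 * 6.2   # stator copper loss, R1 = 6.2 ohm/phase (assumed, star connection)
P_core      = 45.0                  # iron loss [W] (assumed)
P_rotor_cu  = 30.0                  # rotor copper loss from slip estimate [W] (assumed)
P_fw        = 12.0                  # friction and windage loss [W] (assumed)
P_stray     = 0.018 * P_in          # stray-load loss allowance (assumed fraction)

P_out = P_in - (P_stator_cu + P_core + P_rotor_cu + P_fw + P_stray)
efficiency = P_out / P_in
print(f"Input {P_in:.0f} W, output {P_out:.0f} W, efficiency {efficiency:.1%}")
```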
Procedia PDF Downloads 354
247 Redesigning Clinical and Nursing Informatics Capstones
Authors: Sue S. Feldman
Abstract:
As clinical and nursing informatics mature, an area that has gotten a lot of attention is the value capstone projects. Capstones are meant to address authentic and complex domain-specific problems. While capstone projects have not always been essential in graduate clinical and nursing informatics education, employers are wanting to see evidence of the prospective employee's knowledge and skills as an indication of employability. Capstones can be organized in many ways: a single course over a single semester, multiple courses over multiple semesters, as a targeted demonstration of skills, as a synthesis of prior knowledge and skills, mentored by one single person or mentored by various people, submitted as an assignment or presented in front of a panel. Because of the potential for capstones to enhance the educational experience, and as a mechanism for application of knowledge and demonstration of skills, a rigorous capstone can accelerate a graduate's potential in the workforce. In 2016, the capstone at the University of Alabama at Birmingham (UAB) could feel the external forces of a maturing Clinical and Nursing Informatics discipline. While the program had a capstone course for many years, it was lacking the depth of knowledge and demonstration of skills being asked for by those hiring in a maturing Informatics field. Since the program is online, all capstones were always in the online environment. While this modality did not change, other contributors to instruction modality changed. Pre-2016, the instruction modality was self-guided. Students checked in with a single instructor, and that instructor monitored progress across all capstones toward a PowerPoint and written paper deliverable. At the time, the enrollment was few, and the maturity had not yet pushed hard enough. By 2017, doubling enrollment and the increased demand of a more rigorously trained workforce led to restructuring the capstone so that graduates would have and retain the skills learned in the capstone process. There were three major changes: the capstone was broken up into a 3-course sequence (meaning it lasted about 10 months instead of 14 weeks), there were many chunks of deliverables, and each faculty had a cadre of about 5 students to advise through the capstone process. Literature suggests that the chunking, breaking up complex projects (i.e., the capstone in one summer) into smaller, more manageable chunks (i.e., chunks of the capstone across 3 semesters), can increase and sustain learning while allowing for increased rigor. By doing this, the teaching responsibility was shared across faculty with each semester course being taught by a different faculty member. This change facilitated delving much deeper in instruction and produced a significantly more rigorous final deliverable. Having students advised across the faculty seemed like the right thing to do. It not only shared the load, but also shared the success of students. Furthermore, it meant that students could be placed with an academic advisor who had expertise in their capstone area, further increasing the rigor of the entire capstone process and project and increasing student knowledge and skills.Keywords: capstones, clinical informatics, health informatics, informatics
Procedia PDF Downloads 136
246 Educational Leadership Preparation Program Review of Employer Satisfaction
Authors: Glenn Koonce
Abstract:
There is a need to address the improvement of university educational leadership preparation programs through the processes of accreditation and continuous improvement. The program faculty in a university in the eastern part of the United States has incorporated an employer satisfaction focus group to address their national accreditation standard so that employers are satisfied with completers' preparation for the position of principal or assistant principal. Using the Council for the Accreditation of Educator Preparation (CAEP) required proficiencies, the following research questions are investigated: 1) what proficiencies do completers perform the strongest? 2) what proficiencies need to be strengthened? 3) what other strengths beyond the required proficiencies do completers demonstrate? 4) what other areas of responsibility beyond the required proficiencies do completers demonstrate? and 5) how can the program improve in preparing candidates for their positions? This study focuses on employers of one public school district that has a large number of educational leadership completers employed as principals and assistant principals. Central office directors who evaluate principals and principals who evaluate assistant principals are focus group participants. Construction of the focus group questions is a result of recommendations from an accreditation regulatory specialist, reviewed by an expert panel, and piloted by an experienced focus group leader. The focus group session was audio recorded, transcribed, and analyzed using the NVivo Version 14 software. After constructing folders in NVivo, the focus group transcript was loaded and skimmed by diagnosing significant statements and assessing core ideas for developing primary themes. These themes were aligned to address the research questions. From the transcript, codes were assigned to the themes and NVivo provided a coding hierarchy chart or graphical illustration for framing the coding. A final report of the coding process was designed using the primary themes and pertinent codes that were supported in excerpts from the transcript. The outcome of this study is to identify themes that can provide evidence that the educational leadership program is meeting its mission to improve PreK-12 student achievement through well-prepared completers who have achieved the position of principal or assistant principal. The considerations will be used to derive a composite profile of employers' satisfaction with program completers with the capacity to serve, influence, and thrive as educational leaders. Analysis of the idealized themes will result in identifying issues that may challenge university educational leadership programs to improve. Results, conclusions, and recommendations are used for continuous improvement, which is another national accreditation standard required for the program.Keywords: educational leadership preparation, CAEP accreditation, principal & assistant principal evaluations, continuous improvement
Procedia PDF Downloads 33
245 A Measurement Instrument to Determine Curricula Competency of Licensure Track Graduate Psychotherapy Programs in the United States
Authors: Laith F. Gulli, Nicole M. Mallory
Abstract:
We developed a novel measurement instrument to assess Knowledge of Educational Programs in Professional Psychotherapy Programs (KEP-PPP or KEP-Triple P) within the United States. The instrument was designed by a Panel of Experts (PoE) that consisted of Licensed Psychotherapists and Medical Care Providers. Licensure track psychotherapy programs are listed in the databases of the Commission on Accreditation for Marriage and Family Therapy Education (COAMFTE); American Psychological Association (APA); Council on Social Work Education (CSWE); and the Council for Accreditation of Counseling & Related Educational Programs (CACREP). A complete list of psychotherapy programs can be obtained from these professional databases, selecting search fields of (All Programs) in (All States). Each program has a Web link that electronically and directly connects to the institutional program, which can be researched using the KEP-Triple P. The 29-item KEP Triple P was designed to consist of six categorical fields; Institutional Type: Degree: Educational Delivery: Accreditation: Coursework Competency: and Special Program Considerations. The KEP-Triple P was designed to determine whether a specific course(s) is offered in licensure track psychotherapy programs. The KEP-Triple P is designed to be modified to assess any part or the entire curriculum of licensure graduate programs. We utilized the KEP-Triple P instrument to study whether a graduate course in Addictions was offered in Marriage and Family Therapy (MFT) programs. Marriage and Family Therapists are likely to commonly encounter patients with Addiction(s) due to the broad treatment scope providing psychotherapy services to individuals, couples and families of all age groups. Our study of 124 MFT programs which concluded at the end of 2016 found that we were able to assess 61 % of programs (N = 76) since 27 % (N = 34) of programs were inaccessible due to broken Web links. From the total of all MFT programs 11 % (N = 14) did not have a published curriculum on their Institutional Web site. From the sample study, we found that 66 % (N = 50) of curricula did not offer a course in Addiction Treatment and that 34 % (N =26) of curricula did require a mandatory course in Addiction Treatment. From our study sample, we determined that 15 % (N = 11) of MFT doctorate programs did not require an Addictions Treatment course and that 1 % (N = 1) did require such a course. We found that 99 % of our study sample offered a Campus based program and 1 % offered a hybrid program with both online and residential components. From the total sample studied, we determined that 84 % of programs would be able to obtain reaccreditation within a five-year period. We recommend that MFT programs initiate procedures to revise curricula to include a required course in Addiction Treatment prior to their next accreditation cycle, to improve the escalating addiction crisis in the United States. This disparity in MFT curricula raises serious ethical and legal consideration for national and Federal stakeholders as well as for patients seeking a competently trained psychotherapist.Keywords: addiction, competency, curriculum, psychotherapy
Procedia PDF Downloads 154
244 miRNA Expression Profile is Different in Human Amniotic Mesenchymal Stem Cells Isolated from Obese with Respect to Normal Weight Women
Authors: Carmela Nardelli, Laura Iaffaldano, Valentina Capobianco, Antonietta Tafuto, Maddalena Ferrigno, Angela Capone, Giuseppe Maria Maruotti, Maddalena Raia, Rosa Di Noto, Luigi Del Vecchio, Pasquale Martinelli, Lucio Pastore, Lucia Sacchetti
Abstract:
Maternal obesity and nutrient excess in utero increase the risk of future metabolic diseases in the adult life. The mechanisms underlying this process are probably based on genetic, epigenetic alterations and changes in foetal nutrient supply. In mammals, the placenta is the main interface between foetus and mother, it regulates intrauterine development, modulates adaptive responses to sub optimal in uterus conditions and it is also an important source of human amniotic mesenchymal stem cells (hA-MSCs). We previously highlighted a specific microRNA (miRNA) profiling in amnion from obese (Ob) pregnant women, here we compared the miRNA expression profile of hA-MSCs isolated from (Ob) and control (Co) women, aimed to search for any alterations in metabolic pathways that could predispose the new-born to the obese phenotype. Methods: We isolated, at delivery, hA-MSCs from amnion of 16 Ob- and 7 Co-women with pre-pregnancy body mass index (mean/SEM) 40.3/1.8 and 22.4/1.0 kg/m2, respectively. hA-MSCs were phenotyped by flow cytometry. Globally, 384 miRNAs were evaluated by the TaqMan Array Human MicroRNA Panel v 1.0 (Applied Biosystems). By the TargetScan program we selected the target genes of the miRNAs differently expressed in Ob- vs Co-hA-MSCs; further, by KEGG database, we selected the statistical significant biological pathways. Results: The immunophenotype characterization confirmed the mesenchymal origin of the isolated hA-MSCs. A large percentage of the tested miRNAs, about 61.4% (232/378), was expressed in hA-MSCs, whereas 38.6% (146/378) was not. Most of the expressed miRNAs (89.2%, 207/232) did not differ between Ob- and Co-hA-MSCs and were not further investigated. Conversely, 4.8% of miRNAs (11/232) was higher and 6.0% (14/232) was lower in Ob- vs Co-hA-MSCs. Interestingly, 7/232 miRNAs were obesity-specific, being expressed only in hA-MSCs isolated from obese women. Bioinformatics showed that these miRNAs significantly regulated (P<0.001) genes belonging to several metabolic pathways, i.e. MAPK signalling, actin cytoskeleton, focal adhesion, axon guidance, insulin signaling, etc. Conclusions: Our preliminary data highlight an altered miRNA profile in Ob- vs Co-hA-MSCs and suggest that an epigenetic miRNA-based mechanism of gene regulation could affect pathways involved in placental growth and function, thereby potentially increasing the newborn’s risk of metabolic diseases in the adult life.Keywords: hA-MSCs, obesity, miRNA, biosystem
Procedia PDF Downloads 530
243 Pond Site Diagnosis: Monoclonal Antibody-Based Farmer Level Tests to Detect the Acute Hepatopancreatic Necrosis Disease in Shrimp
Authors: B. T. Naveen Kumar, Anuj Tyagi, Niraj Kumar Singh, Visanu Boonyawiwat, A. H. Shanthanagouda, Orawan Boodde, K. M. Shankar, Prakash Patil, Shubhkaramjeet Kaur
Abstract:
Early mortality syndrome (EMS)/Acute Hepatopancreatic Necrosis Disease (AHPND) has emerged as a major obstacle for the shrimp farming around the world. It is caused by a strain of Vibrio parahaemolyticus. The possible preventive and control measure is, early and rapid detection of the pathogen in the broodstock, post-larvae and monitoring the shrimp during the culture period. Polymerase chain reaction (PCR) based early detection methods are good, but they are costly, time taking and requires a sophisticated laboratory. The present study was conducted to develop a simple, sensitive and rapid diagnostic farmer level kit for the reliable detection of AHPND in shrimp. A panel of monoclonal antibodies (MAbs) were raised against the recombinant Pir B protein (rPirB). First, an immunodot was developed by using MAbs G3B8 and Mab G3H2 which showed specific reactivity to purified r-PirB protein with no cross-reactivity to other shrimp bacterial pathogens (AHPND free Vibrio parahaemolyticus (Indian strains), V. anguillarum, WSSV, Aeromonas hydrophila, and Aphanomyces invadans). Immunodot developed using Mab G3B8 is more sensitive than that with the Mab G3H2. However, immunodot takes almost 2.5 hours to complete with several hands-on steps. Therefore, the flow-through assay (FTA) was developed by using a plastic cassette containing the nitrocellulose membrane with absorbing pads below. The sample was dotted in the test zone on the nitrocellulose membrane followed by continuos addition of five solutions in the order of i) blocking buffer (BSA) ii) primary antibody (MAb) iii) washing Solution iv) secondary antibody and v) chromogen substrate (TMB) clear purple dots against a white background were considered as positive reactions. The FTA developed using MAbG3B8 is more sensitive than that with MAb G3H2. In FTA the two MAbs showed specific reactivity to purified r-PirB protein and not to other shrimp bacterial pathogens. The FTA is simple to farmer/field level, sensitive and rapid requiring only 8-10 min for completion. Tests can be developed to kits, which will be ideal for use in biosecurity, for the first line of screening (at the port or pond site) and during monitoring and surveillance programmes overall for the good management practices to reduce the risk of the disease.Keywords: acute hepatopancreatic necrosis disease, AHPND, flow-through assay, FTA, farmer level, immunodot, pond site, shrimp
Procedia PDF Downloads 180
242 Generating a Multiplex Sensing Platform for the Accurate Diagnosis of Sepsis
Authors: N. Demertzis, J. L. Bowen
Abstract:
Sepsis is a complex and rapidly evolving condition resulting from uncontrolled, prolonged activation of the host immune system due to pathogenic insult. The aim of this study is the development of a multiplex electrochemical sensing platform capable of detecting both pathogen-associated and host immune markers to enable the rapid and definitive diagnosis of sepsis. A combination of aptamer and molecular imprinting approaches has been employed to generate sensing systems for lipopolysaccharide (LPS), C-reactive protein (CRP) and procalcitonin (PCT). Gold working electrodes were mechanically polished and electrochemically cleaned with 0.1 M sulphuric acid using cyclic voltammetry (CV). Following activation, a self-assembled monolayer (SAM) was generated by incubating the electrodes with a thiolated anti-LPS aptamer/dithiodibutyric acid (DTBA) mixture (1:20). 3-aminophenylboronic acid (3-APBA) in combination with the anti-LPS aptamer was used for the development of the hybrid molecularly imprinted sensor (apta-MIP). Aptasensors targeting PCT and CRP were also fabricated, following the same approach as in the case of LPS, with mercaptohexanol (MCH) replacing DTBA. In the case of the CRP aptasensor, the SAM was formed following incubation of a 1:1 aptamer:MCH mixture. However, in the case of PCT, the SAM was formed with the aptamer itself, with subsequent backfilling with 1 μM MCH. The binding performance of all systems has been evaluated using electrochemical impedance spectroscopy. The apta-MIP's polymer thickness is controlled by varying the number of electropolymerisation cycles. At the ideal number of polymerisation cycles, the polymer must cover the electrode surface and create a binding pocket around LPS and its aptamer binding site. Fewer polymerisation cycles will create a hybrid system which resembles an aptasensor, while more cycles will cover the complex and demonstrate bulk polymer-like behaviour. Both the aptasensor and the apta-MIP were challenged with LPS and compared to conventional imprinted polymers (absence of the aptamer from the binding site, polymer formed in the presence of LPS) and non-imprinted polymers (NIPs, absence of LPS whilst the hybrid polymer is formed). A stable LPS aptasensor, capable of detecting down to 5 pg/ml of LPS, was generated. The apparent Kd of the system was estimated at 17 pM, with a Bmax of approximately 50 pM. The aptasensor demonstrated high specificity to LPS. The apta-MIP demonstrated superior recognition properties, with a limit of detection of 1 fg/ml and a Bmax of 100 pg/ml. The CRP and PCT aptasensors were both able to detect down to 5 pg/ml. Whilst the full binding performance is still being evaluated, none of the sensors demonstrates cross-reactivity towards LPS, CRP or PCT. In conclusion, stable aptasensors capable of detecting LPS, PCT and CRP at low concentrations have been generated. The realisation of a multiplex panel such as described herein will effectively contribute to the rapid, personalised diagnosis of sepsis.
Keywords: aptamer, electrochemical impedance spectroscopy, molecularly imprinted polymers, sepsis
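One way an apparent Kd and Bmax such as those quoted above can be extracted is by fitting a one-site binding isotherm to the impedance response. The sketch below uses an assumed single-site model and illustrative example numbers, not the study's measurements.

```python
# Hypothetical sketch: fitting a one-site binding model R = Bmax*C/(Kd + C)
# to normalised impedance responses. Concentrations/responses are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc, bmax, kd):
    """Langmuir-type single-site binding isotherm."""
    return bmax * conc / (kd + conc)

conc_pM = np.array([1, 3, 10, 30, 100, 300])              # analyte concentration [pM]
response = np.array([2.5, 6.8, 17.0, 30.1, 41.5, 46.9])   # e.g. % change in charge-transfer resistance

popt, pcov = curve_fit(one_site, conc_pM, response, p0=[50.0, 20.0])
bmax_fit, kd_fit = popt
print(f"Bmax = {bmax_fit:.1f} (response units), apparent Kd = {kd_fit:.1f} pM")
```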
Procedia PDF Downloads 132
241 Multiphase Flow Regime Detection Algorithm for Gas-Liquid Interface Using Ultrasonic Pulse-Echo Technique
Authors: Serkan Solmaz, Jean-Baptiste Gouriet, Nicolas Van de Wyer, Christophe Schram
Abstract:
The efficiency of the cooling process for cryogenic propellant boiling in engine cooling channels in space applications is strongly affected by the phase change that occurs during boiling. The effectiveness of the cooling process depends strongly on the type of boiling regime, such as nucleate or film boiling. Geometric constraints, such as a non-transparent cooling channel, make it impossible to use visualization methods. The ultrasonic (US) technique, as a non-destructive testing (NDT) method, has therefore been applied in almost every engineering field for different purposes. Basically, discontinuities emerge between media, such as the boundaries between different phases. The sound wave emitted by the US transducer is both transmitted and reflected at a gas-liquid interface, which makes it possible to detect different phases. Due to thermal and structural concerns, it is impractical to sustain direct contact between the US transducer and the working fluid. Hence, the transducer should be located outside the cooling channel, which results in additional interfaces and creates ambiguities about the applicability of the present method. In this work, exploratory research is carried out to determine the detection ability and applicability of the US technique to the cryogenic boiling process for a cooling cycle in which the US transducer is placed outside the channel. Boiling of cryogenics is a complex phenomenon which brings several hindrances to the experimental protocol because of the thermal properties involved. Thus, substitute materials are purposefully selected based on these parameters to simplify the experiments. In addition, the nucleate and film boiling regimes emerging during the boiling process are simply simulated using non-deformable stainless steel balls, air-bubble injection apparatuses and air clearances instead of conducting a real-time boiling process. A versatile detection algorithm is then developed based on these exploratory studies. According to the algorithm developed, the phases can be distinguished with 99% accuracy as no-phase, air-bubble, and air-film presences. The results show the detection ability and applicability of the US technique for exploratory purposes.
Keywords: ultrasound, ultrasonic, multiphase flow, boiling, cryogenics, detection algorithm
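A simplified version of such a pulse-echo classification step might look like the sketch below, which gates the echo arriving from the channel wall region and classifies the regime from its amplitude. The sampling rate, gate, thresholds and the synthetic trace are placeholders, not the developed algorithm.

```python
# Hypothetical sketch: classify no-phase / air-bubble / air-film from the gated
# echo amplitude of an ultrasonic pulse-echo trace. Thresholds are placeholders.
import numpy as np

def classify_regime(trace, fs_hz, gate_us=(40.0, 60.0), thresholds=(0.15, 0.55)):
    """Return a regime label from the peak amplitude inside the echo gate."""
    i0 = int(gate_us[0] * 1e-6 * fs_hz)
    i1 = int(gate_us[1] * 1e-6 * fs_hz)
    peak = np.max(np.abs(trace[i0:i1]))          # gated echo amplitude
    if peak < thresholds[0]:
        return "no-phase (liquid only, weak gated echo)"
    elif peak < thresholds[1]:
        return "air-bubble (partial reflection at scattered interfaces)"
    return "air-film (strong reflection at a continuous gas layer)"

fs = 50e6                                        # 50 MHz sampling rate (assumed)
t = np.arange(0, 100e-6, 1 / fs)
# Synthetic echo: 5 MHz burst with a Gaussian envelope centred at 50 microseconds.
trace = 0.6 * np.exp(-((t - 50e-6) / 2e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
print(classify_regime(trace, fs))
```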
Procedia PDF Downloads 173
240 Sensory Characteristics of White Chocolate Enriched with Encapsulated Raspberry Juice
Authors: Ivana Loncarevic, Biljana Pajin, Jovana Petrovic, Danica Zaric, Vesna Tumbas Saponjac, Aleksandar Fistes
Abstract:
Chocolate is a food that activates pleasure centers in the human brain. In comparison to black and milk chocolate, white chocolate does not contain fat-free cocoa solids and thus lacks bioactive components. The aim of this study was to examine the sensory characteristics of enriched white chocolate with the addition of 10% of raspberry juice encapsulated in maltodextrins (denoted as encapsulate). Chocolate is primarily intended for enjoyment, and therefore, the sensory expectation is a critical factor for consumers when selecting a new type of chocolate. Consumer acceptance of chocolate depends primarily on the appearance and taste, but also very much on the mouthfeel, which mainly depends on the particle size of chocolate. Chocolate samples were evaluated by a panel of 8 trained panelists, food technologists, trained according to ISO 8586 (2012). Panelists developed the list of attributes to be used in this study: intensity of red color (light to dark); glow on the surface (mat to shiny); texture on snap (appearance of cavities or holes on the snap surface that are seen - even to gritty); hardness (hardness felt during the first bite of chocolate sample in half by incisors - soft to hard); melting (the time needed to convert solid chocolate into a liquid state – slowly to quickly); smoothness (perception of evenness of chocolate during melting - very even to very granular); fruitiness (impression of fruity taste - light fruity notes to distinct fruity notes); sweetness (organoleptic characteristic of pure substance or mixture giving sweet taste - lightly sweet to very sweet). The chocolate evaluation was carried out 24 h after sample preparation in the sensory laboratory, in partitioned booths, which were illuminated with fluorescent lights (ISO 8589, 2007). Samples were served in white plastic plates labeled with three-digit codes from a random number table. Panelist scored the perceived intensity of each attribute using a 7-point scale (1 = the least intensity and 7 = the most intensity) (ISO 4121, 2002). The addition of 10% of encapsulate had a big influence on chocolate color, where enriched chocolate got a nice reddish color. At the same time, the enriched chocolate sample had less intensity of gloss on the surface. The panelists noticed that addition of encapsulate reduced the time needed to convert solid chocolate into a liquid state, increasing its hardness. The addition of encapsulate had a significant impact on chocolate flavor. It reduced the sweetness of white chocolate and contributed to the fruity raspberry flavor.Keywords: white chocolate, encapsulated raspberry juice, color, sensory characteristics
Procedia PDF Downloads 166
239 The Effect of Social Media Influencer on Boycott Participation through Attitude toward the Offending Country in a Situational Animosity Context
Authors: Hsing-Hua Stella Chang, Mong-Ching Lin, Cher-Min Fong
Abstract:
Using surrogate boycotts as a coercive tactic to force the offending party into changing its approaches has been increasingly significant over the last several decades, and is expected to increase in the future. Research shows that surrogate boycotts are often triggered by controversial international events, and particular foreign countries serve as the offending party in the international marketplace. In other words, multinational corporations are likely to become surrogate boycott targets in overseas markets because of the animosity between their home and host countries. Focusing on the surrogate boycott triggered by a severe situation animosity, this research aims to examine how social media influencers (SMIs) serving as electronic key opinion leaders (EKOLs) in an international crisis facilitate and organize a boycott, and persuade consumers to participate in the boycott. This research suggests that SMIs could be a particularly important information source in a surrogate boycott sparked by a situation of animosity. This research suggests that under such a context, SMIs become a critical information source for individuals to enhance and update their understanding of the event because, unlike traditional media, social media serve as a platform for instant and 24-hour non-stop information access and dissemination. The Xinjiang cotton event was adopted as the research context, which was viewed as an ongoing inter-country conflict, reflecting a crisis, which provokes animosity against the West. Through online panel services, both studies recruited Mainland Chinese nationals to be respondents to the surveys. The findings show that: 1. Social media influencer message is positively related to a negative attitude toward the offending country. 2. Attitude toward the offending country is positively related to boycotting participation. To address the unexplored question – of the effect of social media influencer influence on consumer participation in boycotts, this research presents a finer-grained examination of boycott motivation, with a special focus on a situational animosity context. This research is split into two interrelated parts. In the first part, this research shows that attitudes toward the offending country can be socially constructed by the influence of social media influencers in a situational animosity context. The study results show that consumers perceive different strengths of social pressure related to various levels of influencer messages and thus exhibit different levels of attitude toward the offending country. In the second part, this research further investigates the effect of attitude toward the offending country on boycott participation. The study findings show that such attitude exacerbated the effect of social media influencer messages on boycott participation in a situation of animosity.Keywords: animosity, social media marketing, boycott, attitude toward the offending country
Procedia PDF Downloads 118
238 Laser-Ultrasonic Method for the Measurement of Residual Stresses in Metals
Authors: Alexander A. Karabutov, Natalia B. Podymova, Elena B. Cherepetskaya
Abstract:
The theoretical analysis is carried out to get the relation between the ultrasonic wave velocity and the value of residual stresses. The laser-ultrasonic method is developed to evaluate the residual stresses and subsurface defects in metals. The method is based on the laser thermooptical excitation of longitudinal ultrasonic wave sand their detection by a broadband piezoelectric detector. A laser pulse with the time duration of 8 ns of the full width at half of maximum and with the energy of 300 µJ is absorbed in a thin layer of the special generator that is inclined relative to the object under study. The non-uniform heating of the generator causes the formation of a broadband powerful pulse of longitudinal ultrasonic waves. It is shown that the temporal profile of this pulse is the convolution of the temporal envelope of the laser pulse and the profile of the in-depth distribution of the heat sources. The ultrasonic waves reach the surface of the object through the prism that serves as an acoustic duct. At the interface ‚laser-ultrasonic transducer-object‘ the conversion of the most part of the longitudinal wave energy takes place into the shear, subsurface longitudinal and Rayleigh waves. They spread within the subsurface layer of the studied object and are detected by the piezoelectric detector. The electrical signal that corresponds to the detected acoustic signal is acquired by an analog-to-digital converter and when is mathematically processed and visualized with a personal computer. The distance between the generator and the piezodetector as well as the spread times of acoustic waves in the acoustic ducts are the characteristic parameters of the laser-ultrasonic transducer and are determined using the calibration samples. There lative precision of the measurement of the velocity of longitudinal ultrasonic waves is 0.05% that corresponds to approximately ±3 m/s for the steels of conventional quality. This precision allows one to determine the mechanical stress in the steel samples with the minimal detection threshold of approximately 22.7 MPa. The results are presented for the measured dependencies of the velocity of longitudinal ultrasonic waves in the samples on the values of the applied compression stress in the range of 20-100 MPa.Keywords: laser-ultrasonic method, longitudinal ultrasonic waves, metals, residual stresses
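The velocity-stress relation underlying the theoretical analysis is, to first order, the acoustoelastic effect; a common linearised form is given below as background, since the abstract does not reproduce the derivation.

```latex
% Linearised acoustoelastic relation for longitudinal waves (illustrative form).
v(\sigma) \;\approx\; v_0\left(1 + K\,\sigma\right)
\quad\Longrightarrow\quad
\sigma \;\approx\; \frac{1}{K}\,\frac{v - v_0}{v_0}
```

Here \(v_0\) is the velocity in the stress-free material and \(K\) is the acoustoelastic coefficient determined from calibration samples. With the quoted velocity resolution of 0.05% (about ±3 m/s), a coefficient on the order of \(2\times10^{-5}\ \text{MPa}^{-1}\) is consistent with the stated stress detection threshold of roughly 22.7 MPa for steels of conventional quality.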
Procedia PDF Downloads 328
237 Electrochemical and Microstructure Properties of Chromium-Graphene and SnZn-Graphene Oxide Composite Coatings
Authors: Rekha M. Y., Punith Kumar, Anshul Kamboj, Chandan Srivastava
Abstract:
Coatings plays an important role in providing protection for a substrate and in improving the surface quality. Graphene/graphene oxide (GO) using in coating systems provides an environmental friendly solution towards protection against corrosion. Issues such as, lack of scale, high cost, low quality limits the practical application of graphene/GO as corrosion resistant coating material. One other way to employ these materials for corrosion protection is to incorporate them into coatings that are conventionally used for corrosion protection. Due to the extraordinary properties of graphene/GO, it has been demonstrated that the coatings containing graphene/GO are more corrosion resistant than pure metal/alloy coatings. In the present work, Cr-graphene and SnZn-GO composite coatings were investigated in enhancing the corrosion resistant property when compared to pure Cr coating and pure SnZn coating respectively. All the coatings were electrodeposited over mild-steel substrate. Graphene and GO were synthesized by electrochemical exfoliation method and modified Hummers’ method respectively. In Cr coatings, the microstructural study revealed that the addition of formic acid in the coatings reduced the number of cracks in the coatings. Further addition of graphene in Cr coating enhanced the Cr coating’s morphology. Chemically synthesized ZnO nanoparticles were also embedded in the as-deposited Cr and Cr-graphene coatings to enhance the adhesion of the coating, to improve the surface finish and to increase the corrosion resistant property of the coatings. Diffraction analysis revealed that the addition of graphene also altered the texture of the Cr coatings. In SnZn alloy coatings, the morphological and topographical characterization revealed that the relative smoothness and compactness of the coatings increased with increase in the addition of GO in the coatings. The microstructural investigation revealed large-scale segregation of Zn-rich and Sn-rich phases in the pure SnZn coating. However, in SnZn-GO composite coating the uniform distribution of Zn phase in the Sn-rich matrix was observed. This distribution caused the early and uniform formation of ZnO, which is the corrosion product, yielding better corrosion resistance for the SnZn-GO composite coatings as compared to pure SnZn coating. A significant improvement in corrosion resistance in terms of reduction in corrosion current and corrosion rate and increase in the polarization resistance was observed in Cr coating containing graphene and in SnZn coatings containing GO.Keywords: coatings, corrosion, electrodeposition, graphene, graphene-oxide
Procedia PDF Downloads 186236 Estimation of Dynamic Characteristics of a Middle Rise Steel Reinforced Concrete Building Using Long-Term
Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu
Abstract:
In the earthquake-resistant design of buildings, evaluation of vibration characteristics is important. In recent years, with the increasing number of super high-rise buildings, evaluation of the response has become important not only for the first mode but also for higher modes. Knowledge of vibration characteristics in buildings is mostly limited to the first mode, and knowledge of higher modes is still insufficient. In this paper, using earthquake observation records of an SRC building and applying a frequency filter to an ARX model, the characteristics of the first and second modes were studied. First, we studied the change of the eigen frequency and the damping ratio during the 3.11 earthquake. The eigen frequency gradually decreases from the time of earthquake occurrence and is almost stable after about 150 seconds have passed. At this time, the reduction ratios of the 1st and 2nd eigen frequencies are both about 0.7. Although the damping ratio has a larger error than the eigen frequency, both the 1st and 2nd damping ratios are 3 to 5%. There is also a strong correlation between the 1st and 2nd eigen frequencies, with a regression line of y=3.17x; for the damping ratio, the regression line is y=0.90x. Therefore, the 1st and 2nd damping ratios are approximately the same. Next, we studied the eigen frequency and damping ratio over the long term, from 1998 to 2014, including the period after the 3.11 earthquake. All the considered earthquakes are connected in order of occurrence. The eigen frequency slowly declined from immediately after completion of the building and tended to stabilize after several years, although it declined greatly after the 3.11 earthquake. The reduction ratios of the 1st and 2nd eigen frequencies up to about 7 years after completion are both about 0.8. For the damping ratio, both the 1st and 2nd are about 1 to 6%. After the 3.11 earthquake, the 1st increases by about 1% and the 2nd increases by less than 1%. For the eigen frequency, there is a strong correlation between the 1st and 2nd, with a regression line of y=3.17x; for the damping ratio, the regression line is y=1.01x. Therefore, it can be said that the 1st and 2nd damping ratios are approximately the same. Based on the above results, changes in eigen frequency and damping ratio are summarized as follows. In the long-term study of the eigen frequency, both the 1st and 2nd gradually declined from immediately after completion and tended to stabilize after a few years; they declined further after the 3.11 earthquake. In addition, there is a strong correlation between the 1st and 2nd, and the declining period and the reduction ratio are of the same degree. In the long-term study of the damping ratio, both the 1st and 2nd are about 1 to 6%. After the 3.11 earthquake, the 1st increases by about 1% and the 2nd increases by less than 1%. The 1st and 2nd are also approximately the same degree.Keywords: eigenfrequency, damping ratio, ARX model, earthquake observation records
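A minimal sketch of the kind of identification described, assuming a simple single-mode system: band-pass the input and response records, fit an ARX(2,2) model by least squares, and convert the discrete poles to an eigenfrequency and damping ratio. The synthetic excitation, the 2 Hz mode with 3% damping, and the filter band are assumptions for illustration, not the building's records or the authors' exact filter settings.

```python
# Sketch: least-squares ARX(2,2) fit on band-passed base/roof accelerations,
# then natural frequency and damping from the discrete-time poles.
# Synthetic single-mode data are used here, not the actual building records.
import numpy as np
from scipy import signal

dt, n = 0.01, 6000
f_true, zeta_true = 2.0, 0.03                      # assumed "true" mode
u = np.random.randn(n)                             # base excitation (white noise)
wn = 2 * np.pi * f_true
sysc = signal.TransferFunction([wn**2], [1, 2*zeta_true*wn, wn**2])
_, y, _ = signal.lsim(sysc, u, np.arange(n) * dt)  # roof response

# Band-pass around the mode of interest before the ARX fit
b, a = signal.butter(4, [1.0, 3.5], btype="band", fs=1/dt)
uf, yf = signal.lfilter(b, a, u), signal.lfilter(b, a, y)

# ARX(2,2): y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
Y = yf[2:]
Phi = np.column_stack([-yf[1:-1], -yf[:-2], uf[1:-1], uf[:-2]])
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
a1, a2 = theta[0], theta[1]

poles = np.roots([1.0, a1, a2])                    # discrete-time poles
s = np.log(poles[0]) / dt                          # continuous-time equivalent
freq = abs(s) / (2 * np.pi)
damping = -s.real / abs(s)
print(f"identified frequency ~ {freq:.2f} Hz, damping ~ {damping:.3f}")
```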
Procedia PDF Downloads 219235 On the Survival of Individuals with Type 2 Diabetes Mellitus in the United Kingdom: A Retrospective Case-Control Study
Authors: Njabulo Ncube, Elena Kulinskaya, Nicholas Steel, Dmitry Pshezhetskiy
Abstract:
Life expectancy in the United Kingdom (UK) has been near constant since 2010, particularly for individuals of 65 years and older. This trend has also been noted in several other countries. The slowdown in the increase of life expectancy was concurrent with the increase in the number of deaths caused by non-communicable diseases. Of particular concern is the worldwide exponential increase in the number of diabetes-related deaths. Previous studies have reported increased mortality hazards among diabetics compared to non-diabetics, and differing effects of antidiabetic drugs on mortality hazards. This study aimed to estimate the all-cause mortality hazards and related life expectancies among type 2 diabetes (T2DM) patients in the UK using the time-variant Gompertz-Cox model with frailty. The study also aimed to understand the major causes of the change in life expectancy growth in the last decade. A total of 221,182 individuals (30.8% T2DM, 57.6% males) aged 50 years and above, born between 1930 and 1960 inclusive and diagnosed between 2000 and 2016, were selected from The Health Improvement Network (THIN) database of UK primary care data and followed up to 31 December 2016. About 13.4% of participants died during the follow-up period. The overall all-cause mortality hazard ratio of T2DM compared to non-diabetic controls was 1.467 (1.381-1.558) and 1.38 (1.307-1.457) when diagnosed between 50 to 59 years and 60 to 74 years, respectively. The estimated life expectancies among T2DM individuals without further comorbidities diagnosed at the age of 60 years were 2.43 (1930-1939 birth cohort), 2.53 (1940-1949 birth cohort) and 3.28 (1950-1960 birth cohort) years less than those of non-diabetic controls. However, the 1950-1960 birth cohort had a steeper hazard function compared to the 1940-1949 birth cohort for both T2DM and non-diabetic individuals. In conclusion, mortality hazards for people with T2DM continue to be higher than for non-diabetics. The steeper mortality hazard slope for the 1950-1960 birth cohort might indicate a sub-population contributing to the slowdown in the growth of life expectancy.Keywords: T2DM, Gompertz-Cox model with frailty, all-cause mortality, life expectancy
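To make the link between a hazard ratio and a difference in remaining life expectancy concrete, the sketch below integrates the survival function of a Gompertz baseline hazard scaled by a proportional hazard ratio. The baseline parameters a and b are assumed illustrative values, not the fitted frailty-model estimates from the THIN data; only the hazard ratio of 1.38 is taken from the abstract.

```python
# Sketch: remaining life expectancy at age 60 under a Gompertz baseline hazard,
# with a proportional hazard ratio for T2DM. Baseline parameters a and b are
# assumed illustrative values, not the fitted estimates from the THIN data.
import numpy as np

def remaining_life_expectancy(age, hr=1.0, a=2.0e-5, b=0.095, horizon=60.0, step=0.01):
    """e(age) = integral_0^horizon S(t) dt with Gompertz hazard h(t) = hr*a*exp(b*(age+t))."""
    t = np.arange(0.0, horizon, step)
    # cumulative hazard of the Gompertz model from age to age+t
    H = hr * (a / b) * (np.exp(b * (age + t)) - np.exp(b * age))
    S = np.exp(-H)
    return float(np.sum(S) * step)          # simple rectangle-rule integration

e_control = remaining_life_expectancy(60, hr=1.0)
e_t2dm = remaining_life_expectancy(60, hr=1.38)   # HR reported for diagnosis at 60-74 years
print(f"controls: {e_control:.1f} y, T2DM: {e_t2dm:.1f} y, "
      f"difference: {e_control - e_t2dm:.1f} y")
```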
Procedia PDF Downloads 121234 Experimental and Numerical Investigation of Micro-Welding Process and Applications in Digital Manufacturing
Authors: Khaled Al-Badani, Andrew Norbury, Essam Elmshawet, Glynn Rotwell, Ian Jenkinson , James Ren
Abstract:
Micro welding procedures are widely used for joining materials and developing duplex components or functional surfaces, through methods such as micro discharge welding or the spot welding process, which can be found in the engineering, aerospace, automotive, biochemical, biomedical and numerous other industries. The relationship between the material properties, structure and processing is very important for improving the structural integrity and the final performance of the welded joints. This includes controlling the shape and size of the welding nugget, the state of the heat affected zone, residual stress, etc. Nowadays, modern high-volume production requires the welding of more versatile shapes/sizes and material systems suitable for various applications. Hence, an improved understanding of the micro welding process and of digital tools, based on computational numerical modelling linking key welding parameters, dimensional attributes and the functional performance of the weldment, would directly benefit the industry in developing products that meet current and future market demands. This paper will introduce recent work on developing an integrated experimental and numerical modelling code for micro welding techniques. This includes similar and dissimilar materials, for both ferrous and non-ferrous metals, at different scales. The paper will also present a comparative study of the differences between the micro discharge welding process and the spot welding technique with regard to the size effect of the welding zone and the changes in the material structure. A numerical modelling method for the micro welding processes and their effects on material properties, during melting and cooling at different scales, will also be presented. Finally, the applications of the integrated numerical modelling and material development for the digital manufacturing of welding are discussed with reference to typical application cases such as sensors (thermocouples), energy (heat exchangers) and automotive structures (duplex steel structures).Keywords: computer modelling, droplet formation, material distortion, materials forming, welding
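As a hedged illustration of the melting-and-cooling thermal histories such a numerical model tracks, the sketch below runs an explicit 1-D finite-difference heat conduction calculation with a short surface heat pulse. The material constants, absorbed flux, pulse duration and geometry are all assumed values chosen only to give weld-like temperatures; this is not the authors' model or its parameters.

```python
# Sketch: explicit 1-D finite-difference heat conduction to illustrate the kind of
# melt/cool thermal history a weld-nugget model tracks. All constants are assumed.
import numpy as np

k, rho, cp = 45.0, 7850.0, 490.0          # steel-like conductivity, density, heat capacity
alpha = k / (rho * cp)                    # thermal diffusivity, m^2/s
nx, dx = 200, 5e-6                        # 1 mm domain in 5 um cells
dt = 0.4 * dx**2 / alpha                  # stable explicit time step
T = np.full(nx, 300.0)                    # initial temperature, K

heat_on_steps = 400                       # assumed duration of the welding pulse
q_surface = 1.0e9                         # assumed absorbed flux at the joint, W/m^2

peak = 300.0
for step in range(4000):
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
    lap[0] = 2.0 * (T[1] - T[0]) / dx**2  # insulated left face; heat enters as a source below
    T = T + alpha * dt * lap
    if step < heat_on_steps:              # deposit heat in the surface cell while the pulse is on
        T[0] += q_surface * dt / (rho * cp * dx)
    T[-1] = 300.0                         # far boundary held at ambient
    peak = max(peak, T[0])

print(f"peak surface temperature ~ {peak:.0f} K (cf. ~1800 K melting range of steel)")
print(f"surface temperature after the cooling phase ~ {T[0]:.0f} K")
```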
Procedia PDF Downloads 260233 A Simplified, Low-Cost Mechanical Design for an Automated Motorized Mechanism to Clean Large Diameter Pipes
Authors: Imad Khan, Imran Shafi, Sarmad Farooq
Abstract:
Large diameter pipes, barrels, tubes and ducts are used in a variety of applications covering civil and defense-related technologies. These may include heating/cooling networks, sign poles, bracing, casing, and artillery and tank gun barrels. These large diameter assemblies require regular inspection and cleaning to increase their life and reduce replacement costs. This paper describes the design, development and testing results of an efficient yet simplified, low-maintenance mechanical design, controlled with minimal essential electronics and driven by an electric motor, intended for operation by non-technical staff. The proposed solution provides a simplified user interface and an automated cleaning mechanism that requires a single user to optimally clean pipes and barrels in the range of 105 mm to 203 mm caliber. The proposed system employs linear motion of a specially designed brush along the barrel using a chain of suitable strength and a pulley anchor attached to both ends of the barrel. A specially designed and manufactured gearbox is coupled with an AC motor to drive the contact brush with high torque for efficient cleaning. A suitably powered AC motor is fixed to the front adapter mounted on the muzzle side, whereas the rear adapter carries a pulley-based anchor mounted towards the breech block in the case of a gun barrel. A large-surface brush with a mix of soft nylon and hard copper bristles is connected through a strong steel chain to the motor and the anchor pulley. The system is equipped with limit switches that automatically reverse the direction of travel when the brush reaches either end. The testing results, based on carefully established performance indicators, indicate the superiority of the proposed user-friendly cleaning mechanism vis-à-vis its life cycle cost.Keywords: pipe cleaning mechanism, limiting switch, pipe cleaning robot, large pipes
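A minimal sketch of the auto-reversal behaviour driven by the two limit switches is given below. The read_limit_switch and set_motor_direction helpers are hypothetical placeholders for whatever I/O the actual controller provides; only the direction-switching logic is illustrated.

```python
# Sketch of the limit-switch auto-reversal logic. The I/O helpers are hypothetical
# stand-ins for the real controller hardware; only the reversal logic is shown.
import time

FORWARD, REVERSE = +1, -1

def read_limit_switch(end):
    """Hypothetical: True when the brush trips the switch at 'muzzle' or 'breech'."""
    return False  # replace with the real switch/GPIO read

def set_motor_direction(direction):
    """Hypothetical: command the AC motor (via its contactor/drive) to run in 'direction'."""
    pass          # replace with the real motor command

def run_cleaning_cycle(passes=10, poll_s=0.05):
    direction, completed = FORWARD, 0
    set_motor_direction(direction)
    while completed < passes:
        if direction == FORWARD and read_limit_switch("muzzle"):
            direction, completed = REVERSE, completed + 1
            set_motor_direction(direction)   # brush reached muzzle end: reverse
        elif direction == REVERSE and read_limit_switch("breech"):
            direction, completed = FORWARD, completed + 1
            set_motor_direction(direction)   # brush reached breech end: reverse
        time.sleep(poll_s)                   # simple polling loop
```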
Procedia PDF Downloads 116232 A Cross-Cultural Validation of the Simple Measure of Impact of Lupus Erythematosus in Youngsters (Smiley) among Filipino Pediatric Lupus Patients
Authors: Jemely M. Punzalan, Christine B. Bernal, Beatrice B. Canonigo, Maria Rosario F. Cabansag, Dennis S. Flores, Paul Joseph T. Galutira, Remedios D. Chan
Abstract:
Background: Systemic lupus erythematosus (SLE) is one of the most common autoimmune disorders and predominates in women of childbearing age. The Simple Measure of Impact of Lupus Erythematosus in Youngsters (SMILEY) is the only health-related quality of life tool specific to pediatric SLE; it has been translated into several languages, but not Filipino. Objective: The primary objective of this study was to develop a Filipino translation of the SMILEY and to examine the validity and reliability of this translation. Methodology: The SMILEY was translated into Filipino by a bilingual individual and back-translated by another bilingual individual blinded to the original English version. The translation was evaluated for content validity by a panel of experts and subjected to pilot testing. The pilot-tested translation was used in the validity and reliability testing proper. The SMILEY, together with the previously validated PEDSQL 4.0 Generic Core Scale, was administered to pediatric lupus patients and their parents on two separate occasions: a baseline and a re-test seven to fourteen days apart. Tests for convergent validity, internal consistency and test-retest reliability were performed. Results: A total of fifty children and their parents were recruited. The mean age was 15.38±2.62 years (range 8-18 years), with mean educational attainment at the high school level. The mean duration of SLE was 28 months (range 1-81 months). Subjects found the questionnaires to be relevant and easy to understand and answer. The validity of the SMILEY was demonstrated in terms of content validity, convergent validity, internal consistency and test-retest reliability. Age, socioeconomic status and educational attainment did not show a significant effect on the scores. The difference between child and parent report scores was shown to be significant for the SMILEY total (p=0.0214), effect on social life (p=0.0000) and PEDSQL physical function (p=0.0460), with child reports showing higher scores than parent reports in these domains. Conclusion: SMILEY is a brief, easy to understand, valid and reliable tool for assessing pediatric SLE-specific HRQOL. It will be useful in providing better care and understanding, and may offer critical information regarding the effect of SLE on the quality of life of our pediatric lupus patients. It will help physicians understand the needs of their patients, not only in the treatment of the specific disease but also regarding the impact of the treatment on their daily lives.Keywords: systemic lupus erythematosus, pediatrics, quality of life, Simple Measure of Impact of Lupus Erythematosus in Youngsters (SMILEY)
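For readers unfamiliar with the reliability statistics involved, the sketch below computes Cronbach's alpha for internal consistency and a Pearson correlation between baseline and re-test totals on invented item scores. It is not an analysis of the study's data; the item count and noise levels are arbitrary.

```python
# Sketch: internal consistency (Cronbach's alpha) and test-retest reliability
# (Pearson r between baseline and re-test totals) on invented toy scores.
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=50)                                  # 50 respondents
baseline = true_score[:, None] + 0.5 * rng.normal(size=(50, 8))   # 8 items at baseline
retest = baseline + 0.3 * rng.normal(size=(50, 8))                # re-test 7-14 days later

alpha = cronbach_alpha(baseline)
r = np.corrcoef(baseline.sum(axis=1), retest.sum(axis=1))[0, 1]
print(f"Cronbach's alpha ~ {alpha:.2f}, test-retest r ~ {r:.2f}")
```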
Procedia PDF Downloads 448231 The Use of Palm Kernel Cake in Ration and Its Influence on VFA, NH3 and pH Rumen Fluid of Goat
Authors: Arief, Noovirman Jamarun, Benni Satria
Abstract:
Background: The main problem in the development of livestock in Indonesia is feed, in terms of both quality and quantity. On the other hand, conventional feed ingredients are expensive and difficult to obtain. Therefore, it is necessary to find alternative feed ingredients that have good quality, potential and low cost. Feed ingredients that meet the above requirements are by-products of the palm oil industry, namely palm kernel cake (PKC). This study aims to obtain the best PKC composition for an Etawa goat concentrate ration. Material and Method: This research consists of two stages. Stage I is an in-vitro study using the Tilley and Terry method. The study used a Completely Randomized Design (CRD) with 4 ration treatments and 4 replications. The treatments are the proportions of palm kernel cake (PKC) in the ration, namely: A) 10%, B) 20%, C) 30%, D) 40%. The other feed ingredients are corn, rice bran, tofu waste and minerals. The measured variables are the characteristics of the rumen fluid (pH, VFA and NH3). Stage II was carried out using the best ration from Stage I (Ration C), followed by in-vitro testing of the use of Tithonia (Tithonia diversifolia) and the agricultural waste of sweet potato leaves as forage sources for livestock. The study used a Completely Randomized Design (CRD) with 3 treatments and 5 replications. The treatments were: Treatment A) Best Stage I Concentrate Ration + Tithonia (Tithonia diversifolia), Treatment B) Best Stage I Concentrate Ration + Tithonia (Tithonia diversifolia) and sweet potato leaves, Treatment C) Best Stage I Concentrate Ration + sweet potato leaves. The data obtained were analyzed using analysis of variance, while the differences between treatments were tested using the Duncan Multiple Range Test (DMRT) according to Steel and Torrie. Results of Stage II showed that the use of PKC in rations as concentrate feed, combined with forage from Tithonia (Tithonia diversifolia) and sweet potato leaves, produced pH, VFA and NH3-N values that were still within normal ranges. The best treatment was obtained from diet B (P<0.05), with pH 6.9, 116.29 mM VFA and 15 mM NH3-N. Conclusion: From the results of the study it can be concluded that PKC can be used as a feed ingredient for dairy goat concentrate in combination with forage from Tithonia (Tithonia diversifolia) and sweet potato leaves.Keywords: palm oil by-product, palm kernel cake, concentrate, rumen fluid, Etawa goat
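A minimal sketch of the CRD analysis described, assuming invented rumen pH values for the four PKC levels: a one-way ANOVA gives the overall F-test, after which a post-hoc multiple range test (DMRT in the paper) would separate the means. SciPy has no built-in DMRT, so only the F-test is shown.

```python
# Sketch of the CRD analysis: one-way ANOVA across the four PKC levels with
# invented rumen pH values (four replicates each); not the study's data.
from scipy import stats

ph = {
    "A (10% PKC)": [6.7, 6.8, 6.6, 6.7],
    "B (20% PKC)": [6.8, 6.9, 6.8, 6.7],
    "C (30% PKC)": [6.9, 7.0, 6.9, 6.8],
    "D (40% PKC)": [6.6, 6.7, 6.6, 6.5],
}
f_stat, p_value = stats.f_oneway(*ph.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# If p < 0.05, follow up with a post-hoc multiple range test (DMRT in the paper)
# to identify which PKC levels differ from one another.
```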
Procedia PDF Downloads 179230 Life Cycle Assessment of a Parabolic Solar Cooker
Authors: Bastien Sanglard, Lou Magnat, Ligia Barna, Julian Carrey, Sébastien Lachaize
Abstract:
Cooking is a primary need for humans, and several techniques are used around the globe based on different sources of energy: electricity, solid fuel (wood, coal...), fuel or liquefied petroleum gas. However, all of them lead to direct or indirect greenhouse gas emissions and sometimes to health damage in households. Concentrated solar power therefore represents a great option to lower these damages because of its cleaner use phase. Nevertheless, the construction phase of the solar cooker still requires primary energy and materials, which leads to environmental impacts. The aim of this work is to analyse the ecological impacts of a commercial aluminium parabola and to compare it with other means of cooking, taking the boiling of 2 litres of water three times a day for 40 years as the functional unit. Life cycle assessment was performed using the software Umberto and the EcoInvent database. Calculations were carried out over more than 13 criteria using two methods: the Intergovernmental Panel on Climate Change (IPCC) method and the ReCiPe method. For the reflector itself, different aluminium provenances were compared, as well as the use of recycled aluminium. For the structure, aluminium was compared to iron (primary and recycled) and wood. Results show that the climate impact of the studied parabola is 0.0353 kgCO2eq/kWh when built with Chinese aluminium and can be reduced by a factor of 4 using aluminium from Canada. The assessment also showed that using 32% recycled aluminium would reduce the impact by factors of 1.33 and 1.43 compared to the use of primary Canadian aluminium and primary Chinese aluminium, respectively. The exclusive use of recycled aluminium lowers the impact by a factor of 17. Besides, the use of iron (recycled or primary) or wood for the structure supporting the reflector significantly lowers the impact. The impact categories of the ReCiPe method show that the parabola made from Chinese aluminium has the heaviest impact, except for metal resource depletion, compared to aluminium from Canada, recycled aluminium or iron. The impact of solar cooking was then compared to a gas stove and induction. The gas stove model was a cast iron tripod that supports the cooking pot, and the induction model was likewise a single-spot plate. Results show that the parabolic solar cooker has the lowest ecological impact over the 13 criteria of the ReCiPe method and over the global warming potential compared to the two other technologies. The climate impact of gas cooking is 0.628 kgCO2/kWh when used with natural gas and 0.723 kgCO2/kWh when used with a bottle of gas. In each case, the main part of the emissions comes from gas burning. Induction cooking has a global warming potential of 0.12 kgCO2eq/kWh with the electricity mix of France, 96.3% of the impact being due to electricity production. Therefore, the electricity mix is a key factor for this impact: for instance, with the electricity mixes of Germany and Poland, the impacts are 0.81 kgCO2eq/kWh and 1.39 kgCO2eq/kWh, respectively. Therefore, the parabolic solar cooker has a real ecological advantage compared to both the gas stove and the induction plate.Keywords: life cycle assessment, solar concentration, cooking, sustainability
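To see what the functional unit implies over the 40-year horizon, the sketch below converts the boiling of 2 litres of water three times a day into useful energy and applies the per-kWh climate intensities quoted in the abstract. Applying those intensities directly to the useful energy, and the assumed 80 °C temperature rise, are simplifications for illustration; appliance efficiencies are ignored.

```python
# Sketch: lifetime useful energy of the functional unit (boil 2 L of water three
# times a day for 40 years) and the resulting climate impact for the per-kWh
# intensities quoted in the abstract. The 80 K temperature rise is an assumption,
# and the intensities are applied to useful energy as a simplification.

CP_WATER = 4186.0          # J/(kg K)
mass, dT = 2.0, 80.0       # 2 L of water heated from ~20 C to boiling
events = 3 * 365 * 40      # three boilings a day for 40 years

useful_kwh = mass * CP_WATER * dT * events / 3.6e6
print(f"useful energy over 40 years ~ {useful_kwh:.0f} kWh")

intensities = {                      # kgCO2eq per kWh, as reported in the abstract
    "solar cooker (Chinese Al)": 0.0353,
    "natural gas stove": 0.628,
    "induction (French mix)": 0.12,
    "induction (Polish mix)": 1.39,
}
for name, kg_per_kwh in intensities.items():
    print(f"{name}: ~{kg_per_kwh * useful_kwh:.0f} kgCO2eq over the lifetime")
```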
Procedia PDF Downloads 192229 The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management
Authors: Michael McCann
Abstract:
Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects in order to keep control of resources. To conceal their projects' poor performance, they may seek to engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post 'good' performances and safeguard their position. On the other, since managers' pursuit of unrewarding investments is likely to lead to low long-term profitability, managers will use negative accruals to reduce the current year's earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly. Part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to the literature on the heterogeneous characteristics of boards by investigating the influence of independent director tenure on earnings management, as well as the relative tenures of independent directors and chief executives. A balanced panel dataset comprising 51 companies across 11 annual periods from 2005 to 2015 is used for the analysis. In each annual period, firms were classified as conducting earnings management if they had discretionary accruals in the bottom quartile (downwards) or top quartile (upwards) of the distribution of values for the sample. Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that both absolute and relative measures of board independence and experience do not have a significant impact on the likelihood of earnings management. It is the level of free cash flow that is the major influence on the probability of earnings management: higher free cash flow increases the probability of earnings management significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow. However, the results suggest that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company, industry and situation-specific factors.Keywords: corporate governance, boards of directors, agency theory, earnings management
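A hedged sketch of the classification and regression steps described: firm-years are flagged as earnings management when their discretionary accruals fall in the top or bottom quartile within a year, and the flag is regressed on free cash flow and independent director tenure with a logit model. The data and variable names below are synthetic placeholders, not the study's dataset or full specification.

```python
# Sketch: flag firm-years with discretionary accruals in the top or bottom quartile
# as "earnings management", then a logit of that flag on free cash flow and board
# tenure. Synthetic data and placeholder variable names, not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 51 * 11                                           # 51 firms x 11 years
df = pd.DataFrame({
    "year": np.tile(np.arange(2005, 2016), 51),
    "free_cash_flow": rng.normal(0.08, 0.05, n),
    "ind_director_tenure": rng.uniform(1, 12, n),
    "disc_accruals": rng.normal(0, 0.06, n),
})

# Classify within each year: top or bottom quartile of discretionary accruals
def flag_quartiles(x):
    lo, hi = x.quantile(0.25), x.quantile(0.75)
    return ((x <= lo) | (x >= hi)).astype(int)

df["earnings_mgmt"] = df.groupby("year")["disc_accruals"].transform(flag_quartiles)

X = sm.add_constant(df[["free_cash_flow", "ind_director_tenure"]])
model = sm.Logit(df["earnings_mgmt"], X).fit(disp=0)
print(model.summary())
```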
Procedia PDF Downloads 238228 Spatiotemporal Evaluation of Climate Bulk Materials Production in Atmospheric Aerosol Loading
Authors: Mehri Sadat Alavinasab Ashgezari, Gholam Reza Nabi Bidhendi, Fatemeh Sadat Alavinasab Ashkezari
Abstract:
Atmospheric aerosol loading (AAL) from anthropogenic sources is evident in industrial development. The accelerated trends in material consumption at the global scale in recent years demonstrate consumption paradigms that are sensitive to the planetary boundaries (PB). This paper takes a statistical approach to tracing the path from climate-relevant bulk materials production (CBMP) of steel, cement and plastics to AAL via an updated and validated spatiotemporal distribution. The statistical analysis used the most up-to-date regional and global databases and instrumental technologies. This corresponded to a selection of processes and areas capable of tracking AAL within the last decade, analyzing the most validated data and exploring the underlying behavior functions or models. The results also represent a correlation, within the socio-economic metabolism framework, between the materials specified as macronutrients of society and AAL as a PB with an unknown threshold. The selected country contributors of China, India and the US, and the sample country of Iran, show comparable cumulative AAL values relative to the bulk materials domestic extraction and production rates over the study period of 2012 to 2022. Generally, there is a tendency towards a gradual decline in worldwide and regional aerosol concentrations after 2015. According to our evaluation, a considerable share of the human contribution, equivalent to 20% from CBMP, is attributable to the main anthropogenic aerosol species, including sulfate, black carbon and organic particulate matter. In an innovative approach, this study also explores the potential role of AAL control mechanisms from the economic sectors, in which ordered and smoothed loading trends are traced through the disordered phenomena of CBMP and aerosol precursor emissions. The envisioned equilibrium states lend support to the well-established theory of spin glasses, applicable to physical systems like the Earth and, here, to AAL.Keywords: atmospheric aerosol loading, material flows, climate bulk materials, industrial ecology
Procedia PDF Downloads 85227 Impact of National Institutions on Corporate Social Performance
Authors: Debdatta Mukherjee, Abhiman Das, Amit Garg
Abstract:
In recent years, there has been growing interest in the corporate social responsibility of firms in both the academic literature and the business world. Since business forms a part of society, incorporating socio-environmental concerns into its value chain activities is vital for ensuring mutual sustainability and prosperity. However, until now most of the work has been either descriptive or normative rather than positivist in tone. Even the few studies with a positivist approach have mostly examined the link between corporate financial performance and corporate social performance. These studies have been severely criticized by many eminent authors on the grounds that they lack a theoretical basis for their findings. They have also argued that, apart from corporate financial performance, there must be certain other crucial influences that are likely to determine the corporate social performance of firms. In fact, several studies have indicated that firms operating under distinct national institutions show significant variations in the corporate social responsibility practices they undertake. This clearly suggests that the institutional context of the country in which firms operate is a key determinant of their corporate social performance. Therefore, this paper uses an institutional framework to understand why corporate social performance varies across countries. It examines the impact of country-level institutions on corporate social performance using a sample of 3,240 global publicly held firms across 33 countries covering the period 2010-2015. The country-level institutions include public institutions, private institutions, markets and the capacity to innovate. Econometric analysis has been mainly used to assess this impact. A three-way panel data analysis using fixed effects has been used to test and validate the hypotheses. Most of the empirical findings confirm our hypotheses, and the economic significance indicates the specific impact of each variable and its importance relative to the others. The results suggest that institutional determinants such as the ethical behavior of private institutions, the goods market, the labor market and the innovation capacity of a country are significantly related to the corporate social performance of firms. Based on our findings, a few implications for policy makers across the world are also suggested. The institutions in a country should promote competition. The government should use policy levers to upgrade home demand, such as setting challenging yet flexible safety, quality and environmental standards, framing policies governing buyer information, providing innovative recourse against low-quality goods and services, and promoting early adoption of new and technologically advanced products. Moreover, institution building in a country should be such that it facilitates and improves the capacity of firms to innovate. Therefore, the proposed study argues that country-level institutions impact the corporate social performance of firms, empirically validates this claim, suggests policy implications and attempts to contribute to an extended understanding of corporate social responsibility and corporate social performance in a multinational context.Keywords: corporate social performance, corporate social responsibility, institutions, markets
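A minimal sketch of a fixed-effects panel regression of a corporate social performance score on country-level institutional indicators, with country and year dummies absorbing unobserved heterogeneity. The data, variable names and the two-way (rather than three-way) effects structure are simplifying assumptions for illustration, not the study's 3,240-firm specification.

```python
# Sketch: fixed-effects panel regression of a CSP score on country-level
# institutional indicators, with country and year dummies. Synthetic data and
# placeholder variable names; a simplified stand-in for the three-way design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_countries, n_firms, years = 10, 120, list(range(2010, 2016))

# Country-year institutional scores (placeholder indicators)
cy = pd.DataFrame([
    {"country": c, "year": y,
     "private_inst_ethics": rng.normal(5, 1),
     "innovation_capacity": rng.normal(4, 1),
     "labor_market": rng.normal(4.5, 1)}
    for c in range(n_countries) for y in years
])

firms = pd.DataFrame({"firm": np.arange(n_firms),
                      "country": rng.integers(0, n_countries, n_firms)})
panel = (firms.merge(pd.DataFrame({"year": years}), how="cross")
              .merge(cy, on=["country", "year"]))
panel["csp_score"] = (0.3 * panel["private_inst_ethics"]
                      + 0.2 * panel["innovation_capacity"]
                      + rng.normal(0, 1, len(panel)))

fe = smf.ols("csp_score ~ private_inst_ethics + innovation_capacity + labor_market"
             " + C(country) + C(year)", data=panel).fit()
print(fe.params[["private_inst_ethics", "innovation_capacity", "labor_market"]])
```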
Procedia PDF Downloads 170