Search results for: forward stage
3988 Wireless Sensor Networks Optimization by Using 2-Stage Algorithm Based on Imperialist Competitive Algorithm
Authors: Hamid R. Lashgarian Azad, Seyed N. Shetab Boushehri
Abstract:
Wireless sensor networks (WSNs) have become increasingly popular due to their wide range of applications. A WSN is made up of numerous tiny, battery-powered sensor nodes, so maximizing network lifetime is a significant problem. In this paper, we propose a two-stage protocol based on the imperialist competitive algorithm (2S-ICA) to solve a sensor network optimization problem. Long communication distances between the sensors and the sink deplete sensor energy and shorten the lifetime of the network. By connecting sensors into a series of independent clusters using 2S-ICA, the overall communication distance can be reduced considerably, thereby extending network lifetime. Comparison of the proposed protocol with the LEACH protocol, which is commonly used for WSN problems, shows that our protocol performs better in terms of extending network lifetime and increasing the amount of transmitted data. Keywords: wireless sensor network, imperialist competitive algorithm, LEACH protocol, k-means clustering
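The keywords mention k-means clustering; a minimal sketch of how clustering sensor coordinates around local cluster heads can cut total communication distance (plain k-means in Python, not the 2S-ICA itself; the node layout and sink position are illustrative assumptions):

```python
import math
import random

def kmeans(points, k, iters=50, seed=1):
    """Plain k-means: group sensor coordinates so each node talks to a
    nearby cluster head instead of the distant sink."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Illustrative layout: 60 random nodes, sink at the origin
rng = random.Random(7)
nodes = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(60)]
heads, groups = kmeans(nodes, k=4)
direct = sum(math.dist(p, (0.0, 0.0)) for p in nodes)
clustered = sum(min(math.dist(p, h) for h in heads) for p in nodes)
```

Summing each node's distance to its nearest cluster head (`clustered`) versus every node's distance to the sink (`direct`) shows the distance saving that motivates clustering protocols such as 2S-ICA and LEACH.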
Procedia PDF Downloads 103
3987 TutorBot+: Automatic Programming Assistant with Positive Feedback based on LLMs
Authors: Claudia Martínez-Araneda, Mariella Gutiérrez, Pedro Gómez, Diego Maldonado, Alejandra Segura, Christian Vidal-Castro
Abstract:
The purpose of this document is to showcase preliminary work in developing an EduChatbot-type tool and measuring the effects of its use in providing effective feedback to students in programming courses. This bot, hereinafter referred to as tutorBot+, was constructed based on chatGPT and is tasked with assisting and delivering timely positive feedback to students in the field of computer science at the Universidad Católica de Concepción. The proposed working method consists of four stages: (1) immersion in the domain of Large Language Models (LLMs), (2) development and integration of the tutorBot+ prototype, (3) experiment design, and (4) intervention. The first stage involves a literature review on the use of artificial intelligence in education and the evaluation of intelligent tutors, as well as research on types of feedback for learning and the domain of chatGPT. The second stage encompasses the development of tutorBot+, and the final stage involves a quasi-experimental study with students from the Programming and Database labs, where the learning outcome is the development of computational thinking skills, enabling the tool's effects to be used and measured. The preliminary results of this work are promising: a functional chatbot prototype has been developed, in both conversational and non-conversational versions, integrated into an open-source online judge and programming contest platform. The possibility of generating a custom model, based on a pre-trained one and tailored to the domain of programming, is also being explored, along with the integration of the created tool and the design of the experiment to measure its utility. Keywords: assessment, chatGPT, learning strategies, LLMs, timely feedback
Procedia PDF Downloads 68
3986 The Effect of Initial Sample Size and Increment in Simulation Samples on a Sequential Selection Approach
Authors: Mohammad H. Almomani
Abstract:
In this paper, we examine the effect of the initial sample size and the increment in simulation samples on the performance of a sequential approach used to select the top m designs when the number of alternative designs is very large. The sequential approach consists of two stages. In the first stage, ordinal optimization is used to select a subset that overlaps with the set of the actual best k% designs with high probability. In the second stage, optimal computing budget allocation is used to select the top m designs from the selected subset. We apply the selection approach to a generic example under several parameter settings, with different choices of initial sample size and increment in simulation samples, to explore the impact on the performance of this approach. The results show that the choice of initial sample size and the increment in simulation samples does affect the performance of the selection approach. Keywords: large-scale problems, optimal computing budget allocation, ordinal optimization, simulation optimization
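The two-stage idea can be sketched in a few lines. This toy version uses a hypothetical noisy simulator and spreads the second-stage budget equally (the real optimal computing budget allocation apportions it by mean/variance ratios); it only illustrates how the initial sample size n0 and the increment delta enter the procedure:

```python
import random
import statistics

def sim(design, rng):
    # hypothetical noisy simulation output; lower is better
    return design + rng.gauss(0, 1)

def two_stage_select(designs, m, n0=10, subset=20, delta=5, rounds=10, seed=0):
    """Stage 1: ordinal optimization screens a subset by initial sample
    means. Stage 2: additional budget (here allocated equally, not by the
    OCBA rule) refines the estimates before picking the top m."""
    rng = random.Random(seed)
    samples = {d: [sim(d, rng) for _ in range(n0)] for d in designs}
    # Stage 1: keep the designs with the best initial sample means
    keep = sorted(samples, key=lambda d: statistics.mean(samples[d]))[:subset]
    # Stage 2: spend the remaining budget on the screened subset
    for _ in range(rounds):
        for d in keep:
            samples[d].extend(sim(d, rng) for _ in range(delta))
    return sorted(keep, key=lambda d: statistics.mean(samples[d]))[:m]

best = two_stage_select(range(100), m=5)
```

With 100 alternatives whose true means are one unit apart, the screened-then-refined selection reliably lands on the truly best designs while spending most of the simulation budget on only the screened subset.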
Procedia PDF Downloads 355
3985 Basics of Gamma Ray Burst and Its Afterglow
Authors: Swapnil Kumar Singh
Abstract:
Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ-rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The detection of delayed emission in X-ray, optical, and radio wavelengths, or "afterglow," following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs do appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived "afterglow" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio). It is a slowly fading emission created by collisions between the burst ejecta and interstellar gas. In X-ray wavelengths, the GRB afterglow fades quickly at first, then transitions to a less-steep drop-off (its later behaviour is more complex and is not considered here). During these early phases, the X-ray afterglow has a spectrum that follows a power law: flux F ∝ E^β, where E is the energy and β is the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta.
In many ways, "reverse" shock can be misleading; this shock is still moving outward from the rest frame of the star at relativistic velocity but is ploughing backward through the ejecta in their frame and is slowing the expansion. This reverse shock can be dynamically important, as it can carry energy comparable to that of the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations, such as the spectral energy distribution of the afterglow of a very bright GRB, support this broad picture: the bluer light (optical and X-ray) appears to follow the typical synchrotron forward shock expectation (the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). Further research in GRB physics and particle physics is needed to unfold the mysteries of the afterglow. Keywords: GRB, synchrotron, X-ray, isotropic energy
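The power law F ∝ E^β mentioned above is characterized by a single number; a minimal sketch recovering the spectral index β from (energy, flux) samples via a least-squares fit in log-log space (the data here are synthetic, purely for illustration):

```python
import math

def spectral_index(energies, fluxes):
    """Slope of log F versus log E: for F = C * E**beta this
    least-squares slope recovers the spectral index beta."""
    xs = [math.log(e) for e in energies]
    ys = [math.log(f) for f in fluxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic afterglow spectrum with beta = -1.2
energies = [1.0, 2.0, 4.0, 8.0, 16.0]
fluxes = [3.5 * e ** -1.2 for e in energies]
beta = spectral_index(energies, fluxes)
```

On real afterglow data the fit would be done per epoch, and a change in the recovered β flags spectral breaks such as the cooling break discussed above.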
Procedia PDF Downloads 88
3984 Piracy in Southeast Asian Waters: Problems, Legal Measures and Way Forward
Authors: Ahmad Almaududy Amri
Abstract:
Southeast Asia is considered an important area for piracy studies, for several reasons. Firstly, it recorded the second highest number of piracy attacks in the world from 2008 to 2012; only the African region surpassed the number of piracies committed in Southeast Asia. Secondly, the geographical location of the region is very important to world trade. There are several sea lanes and straits which are normally used for international navigation, mainly for trade purposes; in fact, six of the 25 busiest ports in the world are located in Southeast Asia. In ancient times, the main drivers of piracy were raiding for plunder and the capture of slaves; in modern times, however, developments in politics, economics, and even military technology have drastically altered the universal crime of piracy. There is a variety of motives behind modern-day piracy, including economic gain from ransoms paid by governments or shipping companies, political, and even terrorist reasons. However, it cannot be denied that piratical attacks persist and continue. States have taken measures at both the international and regional levels to eradicate piratical attacks. The United Nations Convention on the Law of the Sea and the Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation serve as the two main international legal frameworks for combating piracy. At the regional level, the Regional Cooperation Agreement against Piracy and Armed Robbery and ASEAN measures are regarded as prominent in addressing the piracy problem. This paper will elaborate the problems of piracy in Southeast Asia and examine the adequacy of the legal frameworks, at both the international and regional levels, in combating piracy.
Furthermore, it will discuss current challenges in the implementation of anti-piracy measures at the international and regional levels, as well as the way forward in addressing the issue. Keywords: piracy, Southeast Asia, maritime security, legal frameworks
Procedia PDF Downloads 503
3983 Higher Education and Empowerment of Women in Assam (India): An Empirical Analysis
Authors: Anupam Deka, Indira Bardoloi
Abstract:
Gender discrimination has been considered a major obstacle to granting women equal opportunity in higher education, even though education plays a pivotal role in a country's socioeconomic development. To examine the empowerment of women in higher education in Assam, a case study was carried out. In the first stage, an overview of the enrollment of students in different courses was made for the whole state. In the second stage, a study was conducted on the enrollment of students in various degree and postgraduate courses for the period 2000-2007 at Gauhati University (one of the four universities of Assam), and the relevant data were collected. It was found that although enrollment at the degree level has been constantly increasing, the enrollment of girls is not increasing proportionately, especially in commerce and law. At the postgraduate level, on the other hand, these proportions are higher in almost all subjects (except some subjects such as M.Com., LL.M., M.C.A., and Mathematics), indicating that, compared to boys, a higher proportion of girls are admitted to postgraduate courses. Keywords: field study, enrollment of girls at degree and postgraduate levels, regression lines, chi-square test, diagrams, statistical tables
Procedia PDF Downloads 257
3982 The Necessity to Standardize Procedures of Providing Engineering Geological Data for Designing Road and Railway Tunneling Projects
Authors: Atefeh Saljooghi Khoshkar, Jafar Hassanpour
Abstract:
One of the main problems at the design stage of many tunneling projects is the lack of an appropriate standard for providing engineering geological data in a predefined format. This is particularly evident in highway and railroad tunnel projects, which involve a number of tunnels and different professional teams. In this regard, comprehensive software needs to be designed using accepted methods to help engineering geologists prepare standard reports that contain sufficient input data for the design stage. To address this need, applied software has been designed using macro capabilities and the Visual Basic for Applications (VBA) programming language in Microsoft Excel. In this software, all of the engineering geological input data required for designing different parts of tunnels, such as discontinuity properties, rock mass strength parameters, rock mass classification systems, boreability classification, and the penetration rate, can be calculated and reported in a standard format. Keywords: engineering geology, rock mass classification, rock mechanics, tunnel
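As one concrete example of the rock mass classification outputs such software automates, a minimal sketch mapping a Rock Mass Rating (RMR) total to its standard class. The parameter ratings passed in would come from the published rating tables; the example values below are illustrative only, and Python is used here simply for brevity (the abstract's tool is VBA in Excel):

```python
def rmr_total(ratings):
    """Sum the RMR parameter ratings (strength, RQD, discontinuity
    spacing, discontinuity condition, groundwater) plus the joint
    orientation adjustment."""
    return sum(ratings)

def rmr_class(total):
    """Map an RMR total (0-100) to the standard five rock mass classes."""
    if total >= 81:
        return "I (very good rock)"
    if total >= 61:
        return "II (good rock)"
    if total >= 41:
        return "III (fair rock)"
    if total >= 21:
        return "IV (poor rock)"
    return "V (very poor rock)"

# Illustrative ratings, not from a real borehole log
total = rmr_total([12, 13, 10, 20, 10, -5])
```

A standardized report generator would emit this class alongside the individual ratings so every team on a multi-tunnel project reads the same format.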
Procedia PDF Downloads 80
3981 Significance of Personnel Recruitment in Implementation of Computer Aided Design Curriculum of Architecture Schools
Authors: Kelechi E. Ezeji
Abstract:
The inclusion of relevant content in the curricula of architecture schools is vital for the attainment of Computer Aided Design (CAD) proficiency by graduates. Implementing this content involves, among other variables, the presence of competent tutors. Consequently, this study sought to investigate the importance of personnel recruitment for the inclusion of content vital to the implementation of CAD in the curriculum for architecture education, with a view to developing a framework for appropriate implementation of a CAD curriculum. It focused on departments of architecture in universities in south-east Nigeria which have been accredited by the National Universities Commission. A survey research design was employed. Data were obtained from sources within the study area using questionnaires, personal interviews, physical observation/enumeration, and examination of institutional documents. A multi-stage stratified random sampling method was adopted: the first stage involved random sampling of the departments by balloting; the second stage involved obtaining the respondent population from the number of staff and students in the sampled departments. The chi-square test for nominal variables and Pearson's product-moment correlation test for interval variables were used for data analysis. With p < 0.05, the study found a significant correlation between the number of CAD-literate academic staff and the use of CAD in design studio assignments, and found that an increase in the overall number of teaching staff significantly affected the total CAD credit units in the department's curriculum. The implication of these findings is that, for successful implementation leading to the attainment of CAD proficiency, CAD literacy should be a factor in the recruitment of staff, and a policy of in-house training should be pursued. Keywords: computer-aided design, education, personnel recruitment, curriculum
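For the nominal-variable analysis, the chi-square statistic can be computed directly from a contingency table; a minimal sketch (the counts are invented for illustration, e.g. CAD-literate versus non-literate staff against whether studios use CAD):

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table given as a
    list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: staff CAD literacy vs. CAD use in studio work
stat = chi_square([[20, 10], [10, 20]])
```

The statistic is then compared against the chi-square critical value for the table's degrees of freedom at the chosen significance level.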
Procedia PDF Downloads 209
3980 Comparative Study of Music-Therapy Types on Anxiety in Early Stage Cancer Patients: A Randomized Clinical Trial
Authors: Farnaz Dehkhoda
Abstract:
This study was conducted to compare the effectiveness of active and receptive music-therapy on anxiety in cancer patients undergoing chemotherapy or radiotherapy. 184 young adult patients who had been diagnosed with early-stage cancer and were undergoing treatment were divided into three groups. Two groups received music-therapy as a parallel treatment, and the third was a control group. In active music-therapy, a music specialist helped the patients play guitar and sing. In receptive music-therapy, patients listened to their preferred pre-recorded music played on an MP3 player. The level of anxiety was measured with the Beck Anxiety Inventory as a pre-test and post-test. ANCOVA revealed that both types of music-therapy reduced patients' anxiety levels, with the active music-therapy intervention found to be more effective. The results suggest that music-therapy can be applied as an intervention method alongside cancer medical treatment to improve quality of life in cancer patients by reducing their anxiety. Keywords: anxiety, cancer, chemotherapy, music-therapy
Procedia PDF Downloads 181
3979 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process
Authors: Reyna Singh, David Lokhat, Milan Carsky
Abstract:
The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high-value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at comparatively mild operating conditions. A temperature-staged hydrogenation approach was investigated. In a two-reactor lab-scale pilot plant facility, the objectives included maximising thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, and subsequently promoting hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high-grade, pulverized bituminous coal on a moisture-free basis with a size fraction of < 100 μm, and Tetralin mixed in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250℃ and 300℃ and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first-stage liquid product was pumped into the second-stage vertical reactor, which was designed to counter-currently contact the hydrogen-rich gas stream and the incoming liquid flow in the fixed catalyst bed. Two commercial hydrotreating catalysts, Cobalt-Molybdenum (CoMo) and Nickel-Molybdenum (NiMo), were compared in terms of their conversion, selectivity, and HDS performance at temperatures 50℃ higher than the respective first-stage tests. The catalysts were activated at 300°C with a hydrogen flowrate of approximately 10 ml/min prior to testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis using Gas Chromatography-Mass Spectrometry (GC-MS).
Internal standard quantification methods for the sulphur content; the BTX (benzene, toluene, and xylene) and alkene quality; and the alkane and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products were guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, increased coal-to-liquid conversion was favoured by the lower operating temperature of 250℃, a reaction time of 60 minutes, and a system catalysed by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long-chain alkanes undecane and dodecane, the unsaturated alkenes octene and nonene, and PAH compounds such as indene. The second-stage product distribution showed an increase in the BTX quality of the liquid product and in branched-chain alkanes, and a reduction in the sulphur concentration. As an HDS performer, and in selectivity to the production of long- and branched-chain alkanes, NiMo performed better than CoMo; CoMo is selective to a higher concentration of cyclohexane. Over 16 days on stream each, NiMo had a higher activity than CoMo. The potential of the process to help cover the demand for low-sulphur crude diesel and solvents through the production of high-value hydrocarbon liquids is thus demonstrated. Keywords: catalyst, coal, liquefaction, temperature-staged
Procedia PDF Downloads 648
3978 Risk Factors Associated with Obesity Among Adults in Tshikota, Makhado Municipality, Limpopo Province, South Africa
Authors: Ndou Rembuluwani Moddy, Daniel Ter Goon, Takalani Grace Tshitangano, Lindelani Fhumudzani Mushaphi
Abstract:
Obesity is a global public health problem. The study aimed to determine the risk factors associated with, and the consequences of, obesity among residents of Tshikota, Makhado Municipality, Limpopo Province, South Africa. A cross-sectional study was conducted involving 318 randomly selected adults aged 18-45 years residing at Tshikota, Makhado Local Municipality, South Africa. Sociodemographic information included age, gender, educational level, occupation, behavioral lifestyle, and environmental, psychological, and family history factors. Anthropometric, blood pressure, and blood glucose measurements followed standard procedures. The prevalence of obesity and overweight was 35.5% and 28.6%, respectively. About 75.2% of obese participants did not engage in physical activity. Most participants (63.5%) took meals three times a day, and 19.2% did not skip breakfast. Most participants did not have access to fruits and vegetables. Ninety-two participants (28.9%) were pre-hypertensive, and 32 (10.1%) were in Stage 1 hypertension. Of the participants with Class 1 obesity, 40.9% were pre-hypertensive and 15.2% were in Stage 1 hypertension; in Class 2 obesity, 37.8% were pre-hypertensive and 26.7% were in Stage 1 hypertension. There was a significant difference between BMI and blood pressure among participants (p = 0.00). About 6.1% of the participants in Class 1 obesity were at high risk, and 3.0% at very high risk, based on glucose levels. Regarding cholesterol levels, 65 (20.4%) were at borderline and 17 (5.3%) were at high risk. There was no significant difference in BMI and cholesterol levels among participants (p = 0.20). The prevalence of obesity and overweight was high among residents of this setting. Age, marital and educational status, and employment were significantly associated with obesity.
An obesity awareness campaign is crucial, and the availability of supermarkets and full-service grocery stores would improve access to healthy foods such as fruits and vegetables. Keywords: obesity, overweight, risk factors, adults
Procedia PDF Downloads 87
3977 Multiphysics Coupling Between Hypersonic Reactive Flow and Thermal Structural Analysis with Ablation for TPS of Space Launchers
Authors: Margarita Dufresne
Abstract:
This study is devoted to the development of a TPS for small reusable space launchers. We used the SIRIUS design for the S1 prototype. Multiphysics coupling of the hypersonic reactive flow and the thermo-structural analysis, with and without ablation, is provided by STAR-CCM+ with COMSOL Multiphysics and by FASTRAN with ACE+. The flow around hypersonic flight vehicles involves the interaction of multiple shocks and the interaction of shocks with boundary layers. These interactions can have a very strong impact on the aeroheating experienced by the flight vehicle. A real gas implies a gas in equilibrium or non-equilibrium. The Mach number ranges from 5 to 10 for first-stage flight. The goals of this effort are to validate the iterative coupling of the hypersonic physics models in STAR-CCM+ and FASTRAN with COMSOL Multiphysics and ACE+. COMSOL Multiphysics and ACE+ are used for the thermal structural analysis to simulate conjugate heat transfer, with conduction, free convection, and radiation driven by the heat flux from the hypersonic flow. The reactive simulations involve an air chemistry model of five species: N, N2, NO, O, and O2. Seventeen chemical reactions, involving dissociation and recombination, are included in the Dunn/Kang mechanism. Forward reaction rate coefficients based on a modified Arrhenius equation are computed for each reaction. The algorithms employed to solve the reactive equations use a second-order numerical scheme obtained by a MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) extrapolation process in the structured case, with AUSM+ flux-vector splitting for the coupled inviscid flux. The MUSCL third-order scheme in STAR-CCM+ provides third-order spatial accuracy, except in the vicinity of strong shocks, where, due to limiting, the spatial accuracy is reduced to second order, and it provides reduced dissipation compared to the second-order discretization scheme.
The initial unstructured mesh is refined using this initial pressure-gradient technique for the shock/shock interaction test case. The turbulence model suggested by NASA is the k-omega SST with a1 = 0.355 and the QCR (quadratic) constitutive option. k and omega are specified explicitly in the initial conditions and in regions: k = 1e-6 * Uinf^2 and omega = 5 * Uinf / (mean aerodynamic chord or characteristic length). We put into practice modelling tips for hypersonic flow, such as the automatic coupled solver, adaptive mesh refinement to capture and refine the shock front, and the advancing-layer mesher with a larger prism-layer thickness to capture the shock front on blunt surfaces. The temperature ranges from 300 K to 30,000 K and the pressure between 1e-4 and 100 atm. FASTRAN and ACE+ are coupled to provide a high-fidelity solution for the hot hypersonic reactive flow with conjugate heat transfer. The results of both approaches meet the CIRCA wind tunnel results. Keywords: hypersonic, first stage, high speed compressible flow, shock wave, aerodynamic heating, conjugate heat transfer, conduction, free convection, radiation, FASTRAN, ACE+, COMSOL Multiphysics, STAR-CCM+, thermal protection system (TPS), space launcher, wind tunnel
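The modified Arrhenius form used above for the forward rate coefficients can be written down directly; a minimal sketch (the coefficients A, n, and Ea below are placeholders for illustration, not the actual Dunn/Kang values):

```python
import math

def arrhenius_rate(T, A, n, Ea, R=8.314):
    """Modified Arrhenius rate coefficient k = A * T**n * exp(-Ea/(R*T)),
    the form used for the forward rates of the five-species air reactions.
    T in K, Ea in J/mol, R in J/(mol*K)."""
    return A * T ** n * math.exp(-Ea / (R * T))

# Placeholder coefficients, illustration only: dissociation-type reaction
k_low = arrhenius_rate(2000.0, A=1.0e12, n=-0.5, Ea=4.0e5)
k_high = arrhenius_rate(8000.0, A=1.0e12, n=-0.5, Ea=4.0e5)
```

The strong growth of k with temperature is why the dissociation reactions switch on across the shock layer, where post-shock temperatures reach the thousands of kelvin quoted above.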
Procedia PDF Downloads 70
3976 Evaluation of MPPT Algorithms for Photovoltaic Generator by Comparing Incremental Conductance Method, Perturbation and Observation Method and the Method Using Fuzzy Logic
Authors: Elmahdi Elgharbaoui, Tamou Nasser, Ahmed Essadki
Abstract:
In the era of sustainable development, photovoltaic (PV) technology has shown significant potential as a renewable energy source. Photovoltaic generators (GPV) have a non-linear current-voltage characteristic, with a maximum power point (MPP) characterized by an optimal voltage that depends on environmental factors such as temperature and irradiation. To extract at all times the maximum power available at the terminals of the GPV and transfer it to the load, an adaptation stage is used, consisting of a boost chopper controlled by a maximum power point tracking (MPPT) technique through a pulse width modulation (PWM) stage. Our study focuses on three techniques: the perturbation and observation (P&O) method, the incremental conductance (InCond) method, and control using fuzzy logic. The implementation and simulation of the system (photovoltaic generator, boost chopper, PWM, and MPPT techniques) are performed in the Matlab/Simulink environment. Keywords: photovoltaic generator, MPPT technique, boost chopper, PWM, fuzzy logic, P&O, InCond
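A minimal sketch of the P&O technique evaluated above: perturb the operating voltage, observe the power, and keep the perturbation direction that increases it. The parabolic panel model with its MPP at 30 V is a toy stand-in for a real GPV characteristic:

```python
def pv_power(v):
    # toy PV curve: maximum power point at v = 30 V, peak 900 W
    return max(0.0, 900.0 - (v - 30.0) ** 2)

def perturb_observe(measure, v0=20.0, dv=0.5, steps=100):
    """P&O loop: nudge the voltage by dv each step and reverse the
    perturbation direction whenever the measured power drops."""
    v, direction = v0, 1.0
    p_prev = measure(v)
    for _ in range(steps):
        v += direction * dv
        p = measure(v)
        if p < p_prev:
            direction = -direction  # power fell: perturb the other way
        p_prev = p
    return v

v_mpp = perturb_observe(pv_power)
```

In a real system `measure` would read the panel's voltage and current sensors, and the voltage command would set the boost chopper's duty cycle via PWM. The steady-state oscillation around the MPP visible here is the known drawback of P&O that the InCond and fuzzy-logic controllers aim to reduce.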
Procedia PDF Downloads 323
3975 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of utilization. In all these fields, the amount of collected data is increasing quickly, and with this increase the computation speed becomes the critical factor. Data reduction is one solution to this problem. Removing the redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing of instructions and data; consequently, software-based implementations are relatively slow. Hardware systems don't have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes from the core. In this paper, the hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input, and the output of the algorithm is the superreduct, which is the reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table.
The algorithm described above has two disadvantages: (i) it generates a superreduct instead of a reduct, and (ii) the additional first stage may be unnecessary if the core is empty. But for systems focused on fast computation of the reduct, the first disadvantage is not a key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing. Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
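A software sketch of the two-stage scheme, as a plain-Python analogue of the hardware blocks: the singleton test plays the role of the 'singleton detector', and the frequency counter plays the adder cascade. The toy decision table is invented for illustration:

```python
from collections import Counter
from itertools import combinations

def core_and_superreduct(rows, cond, dec):
    """Stage 1: the core is the set of attributes appearing as singleton
    entries of the discernibility matrix. Stage 2: grow a superreduct by
    adding the most frequent attributes until every entry is covered."""
    # discernibility entries for object pairs with different decisions
    entries = [frozenset(a for a in cond if x[a] != y[a])
               for x, y in combinations(rows, 2) if x[dec] != y[dec]]
    core = {next(iter(e)) for e in entries if len(e) == 1}
    freq = Counter(a for e in entries for a in e)
    reduct = set(core)
    for a, _ in freq.most_common():
        if all(e & reduct for e in entries):
            break  # every object pair is already discerned
        reduct.add(a)
    return core, reduct

# Toy decision table: condition attributes a, b, c and decision d
table = [{'a': 0, 'b': 0, 'c': 0, 'd': 0},
         {'a': 1, 'b': 0, 'c': 0, 'd': 1},
         {'a': 0, 'b': 1, 'c': 1, 'd': 1}]
core, reduct = core_and_superreduct(table, ['a', 'b', 'c'], 'd')
```

As in the paper, the result may be a superreduct: greedy enrichment can admit an attribute that a later check would find removable.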
Procedia PDF Downloads 219
3974 The Efficacy of Salicylic Acid and Puccinia Triticina Isolates Priming Wheat Plant to Diuraphis Noxia Damage
Authors: Huzaifa Bilal
Abstract:
The Russian wheat aphid (Diuraphis noxia Kurdjumov) is considered an economically important pest of wheat (Triticum aestivum L.) worldwide and in South Africa. The RWA damages wheat plants and reduces annual yields by more than 10%. Although pest management by pesticides and resistance breeding is an attractive option, chemicals can harm the environment, and the evolution of resistance-breaking aphid biotypes has outpaced the release of resistant cultivars. An alternative strategy to reduce the impact of aphid damage on plants is therefore necessary, such as priming, which sensitizes plants to respond effectively to subsequent attacks. In this study, wheat plants at the seedling and flag-leaf stages were primed with salicylic acid and with isolates representative of two races of the leaf rust pathogen Puccinia triticina Eriks. (Pt) before infestation with RWA (South African RWA biotypes 1 and 4). Randomized complete block design experiments were conducted in the greenhouse to study the plant-pest interaction in primed and non-primed plants. Analysis of induced aphid damage indicated that salicylic acid differentially primed wheat cultivars for increased resistance to the RWASA biotypes. At the seedling stage, all cultivars were primed for enhanced resistance to RWASA1, while at the flag-leaf stage only PAN 3111, SST 356, and Makalote were primed for increased resistance. The Puccinia triticina isolates efficaciously primed wheat cultivars for excellent resistance to RWASA1 at the seedling and flag-leaf stages. However, Pt failed to enhance the resistance of the four Lesotho cultivars to RWASA4 at the seedling stage, and of PAN 3118 at the flag-leaf stage. The induced responses at the seedling and flag-leaf stages were positively correlated in all treatments. Primed plants showed high activity of antioxidant enzymes such as peroxidase, ascorbate peroxidase, and superoxide dismutase.
High antioxidant activity indicates activation of resistance responses in primed plants (primed with salicylic acid and Puccinia triticina). Isolates of avirulent Pt races can be a worthy priming agent for improved resistance to RWA infestation. Further confirmation of the priming effects needs to be evaluated in field trials to investigate application efficiency.Keywords: Russian wheat aphid, salicylic acid, Puccinia triticina, priming
Procedia PDF Downloads 2083973 Risk Management Practices In The Construction Industry In Malawi
Authors: Taonga Temwani Chibaka
Abstract:
This qualitative research study was conducted to identify the common risk factors that affect building and infrastructure (civil works) projects in the construction industry in Malawi. The study then evaluates the possible risk responses used to mitigate the identified risk factors. In addition, the research established the barriers to risk management implementation, mapped out the project stage at which each identified risk factor falls, and identified the knowledge areas that need to be worked on in the Malawian construction industry in order to mitigate most of the identified risk factors. The study involved interviewing professionals from the construction industry in Malawi, whose insights and ideas were collected, analysed and interpreted. The key findings show that client-related risks are perceived as most critical, followed by contractor-related, consultant-related and external factors respectively. Preventive measures are the most applied risk response technique, aiming to avoid most of the risk factors before they occur. Most of the risk factors identified were internal and in the managerial category, which suggests that risk planning should be emphasized at the pre-contract stage to minimize these risks, since a large percentage of the risk factors were mapped to the implementation stage. Furthermore, barriers to risk management were identified; the key barriers were lack of awareness, lack of knowledge, lack of formal policies, risk management being regarded as costly, and limited time. This led to the proposal that regulating authorities purposefully introduce intensive training on risk management to make this knowledge area known. The study then recommends that organisations formally implement risk management, with policies introduced to require all parties to undertake it. 
Risk planning was regarded as paramount and should be done from the pre-contract phase so as to mitigate 80% of the risk factors. Finally, training should be done on all project management knowledge areas.Keywords: risk management, risk factors, risks, Malawi
Procedia PDF Downloads 3223972 Alternative General Formula to Estimate and Test Influences of Early Diagnosis on Cancer Survival
Authors: Li Yin, Xiaoqin Wang
Abstract:
Background and purpose: Cancer diagnosis is part of a complex stochastic process, in which patients' personal and social characteristics influence the choice of diagnosing methods; diagnosing methods, in turn, influence the initial assessment of cancer stage; the initial assessment, in turn, influences the choice of treating methods; and treating methods, in turn, influence cancer outcomes such as cancer survival. To evaluate diagnosing methods, one needs to estimate and test the causal effect of a regime of cancer diagnosis and treatments. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to estimate and test these causal effects via point effects. The purpose of this work is to estimate and test causal effects under various regimes of cancer diagnosis and treatments via point effects. Challenges and solutions: The cancer stage is influenced by earlier diagnosis and in turn influences subsequent treatments. As a consequence, it is highly difficult to estimate and test the causal effects via standard parameters, that is, the conditional survival given all stationary covariates: diagnosing methods, cancer stage, prognosis factors, and treating methods. Instead of standard parameters, we use the point effects of cancer diagnosis and treatments to estimate and test causal effects under various regimes of cancer diagnosis and treatments, which allows us to use familiar methods from the framework of single-point causal inference to accomplish the task. Achievements: We have applied this method to stomach cancer survival from a clinical study in Sweden. 
We have studied causal effects under various regimes, including the optimal regime of diagnosis and treatments and the effect moderation of the causal effect by age and gender.Keywords: cancer diagnosis, causal effect, point effect, G-formula, sequential causal effect
Procedia PDF Downloads 1953971 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters
Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev
Abstract:
Humanity is faced more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain early signals about events which are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet are developed. Information in Romanian is of special interest to us. In order to obtain the mentioned tools, we follow several steps, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles, comprising more than 150 thousand words, and classified them into 10 categories of social disasters. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets has been used. We deal with the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique will be used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the analysis modules of PIPE, such as Generalized Stochastic Petri Nets (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, have been used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to its dynamics.Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters
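The Petri-net part of the approach above can be sketched in a few lines of code. The two rooms, the corridor and the transition structure below are invented for illustration (they are not taken from the study), and the sketch omits the timing and stochastic firing that the GSPN analysis in PIPE would add:

```python
# Minimal place/transition Petri-net sketch for an evacuation scenario.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count (tokens = persons)
        self.transitions = []          # list of (inputs, outputs) dicts

    def add_transition(self, inputs, outputs):
        self.transitions.append((inputs, outputs))

    def enabled(self, t):
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, t):
        inputs, outputs = self.transitions[t]
        if not self.enabled(t):
            raise ValueError("transition not enabled")
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Hypothetical layout: two rooms feed a corridor, the corridor feeds the exit.
net = PetriNet({"room_a": 3, "room_b": 2, "corridor": 0, "outside": 0})
net.add_transition({"room_a": 1}, {"corridor": 1})   # t0: leave room A
net.add_transition({"room_b": 1}, {"corridor": 1})   # t1: leave room B
net.add_transition({"corridor": 1}, {"outside": 1})  # t2: exit the building

# Fire enabled transitions until all 5 persons are outside.
while net.marking["outside"] < 5:
    for t in range(3):
        if net.enabled(t):
            net.fire(t)

print(net.marking)  # rooms and corridor empty, 5 tokens in "outside"
```

Reachability and invariant analysis, as performed with PIPE in the study, would operate on exactly this kind of marking graph; note that token count is conserved here (a P-invariant), which is the property the invariants technique verifies.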
Procedia PDF Downloads 1973970 Kant’s Conception of Human Dignity and the Importance of Singularity within Commonality
Authors: Francisco Lobo
Abstract:
Kant’s household theory of human dignity as a common feature of all rational beings is the starting point of any intellectual endeavor to unravel the implications of this normative notion. Yet, it is incomplete, as it neglects considering the importance of the singularity or uniqueness of the individual. In a first, deconstructive stage, this paper describes the Kantian account of human dignity as one among many conceptions of human dignity. It reads carefully into the original wording used by Kant in German and its English translations, as well as the works of modern commentators, to identify its shortcomings. In a second, constructive stage, it then draws on the theories of Aristotle, Alexis de Tocqueville, John Stuart Mill, and Hannah Arendt to try and enhance the Kantian conception, in the sense that these authors give major importance to the singularity of the individual. The Kantian theory can be perfected by including elements from the works of these authors, while at the same time being mindful of the dangers entailed in focusing too much on singularity. The conclusion of this paper is that the Kantian conception of human dignity can be enhanced if it acknowledges that not only morality has dignity, but also the irreplaceable human individual to the extent that she is a narrative, original creature with the potential to act morally.Keywords: commonality, dignity, Kant, singularity
Procedia PDF Downloads 2833969 A Neural Network for the Prediction of Contraction after Burn Injuries
Authors: Ginger Egberts, Marianne Schaaphok, Fred Vermolen, Paul van Zuijlen
Abstract:
A few years ago, a promising morphoelastic model was developed for the simulation of contraction formation after burn injuries. Contraction can lead to a serious reduction in physical mobility, such as a reduction in the range of motion of joints. If this occurs in a healing burn wound, it is referred to as a contracture that needs medical intervention. The morphoelastic model consists of a set of partial differential equations describing both a chemical part and a mechanical part of dermal wound healing. These equations are solved with the numerical finite element method (FEM). This method requires many calculations on each of the chosen elements; in general, the more elements, the more accurate the solution. However, the number of elements increases rapidly if simulations are performed in 2D and 3D. In that case, not only does it take longer before a prediction is available, the computation also becomes more expensive. It is therefore important to investigate alternative ways to generate the same results based on the input parameters only. In this study, a surrogate neural network has been designed to mimic the results of the one-dimensional morphoelastic model. The neural network generates predictions quickly, is easy to implement, and offers freedom in the choice of input and output. Because a neural network requires extensive training on a data set, it is ideal that the one-dimensional FEM code generates output quickly. The results of this feed-forward-type neural network are very promising: not only can the network give faster predictions, it also achieves a performance of over 99%. It reports the relative surface area of the wound/scar, the total strain energy density, and the evolution of the densities of the chemicals and mechanics. 
It is, therefore, interesting to investigate the applicability of a neural network for the two- and three-dimensional morphoelastic model for contraction after burn injuries.Keywords: biomechanics, burns, feasibility, feed-forward NN, morphoelasticity, neural network, relative surface area wound
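The surrogate idea described above can be illustrated with a minimal feed-forward network trained by gradient descent. The toy target function, layer sizes and learning rate below are assumptions for illustration only; the actual surrogate is trained on input-output pairs generated by the one-dimensional morphoelastic FEM code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: one input parameter mapped to one scalar response,
# standing in for (model parameters) -> (relative surface area of the scar).
X = rng.uniform(-1, 1, size=(200, 1))
y = 0.5 * X + 0.1 * X**2            # smooth invented target, not FEM output

# One hidden layer with tanh activation (a small feed-forward surrogate).
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

loss0 = mse(forward(X)[0], y)        # error before training

lr = 0.05
for _ in range(2000):                # full-batch gradient descent
    pred, h = forward(X)
    err = 2 * (pred - y) / len(X)    # d(MSE)/d(pred)
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = err @ W2.T * (1 - h**2)     # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

loss1 = mse(forward(X)[0], y)        # error after training
print(loss0, loss1)
```

Once trained, evaluating `forward` costs a handful of matrix products, which is the speed advantage over re-running the FEM solver for each new parameter set.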
Procedia PDF Downloads 553968 Unpredictable Territorial Interiority: Learning the Spatiality from the Early Space Learners
Authors: M. Mirza Y. Harahap
Abstract:
This paper explores the interiority of children’s territorialisation in a domestic space context by looking at their affective relations with their surroundings. Examining its spatiality, the research focuses on the interactions that develop between children and the things in their house, specifically those which leave traces, indicating the very arena of their territory. As early learners whose mind and body are still developing, children are hypothetically distinct in the way they territorialise space. Rules, common sense and other forms of common acceptance among adults might not be relevant to the way they territorialise space. Unpredictability, inappropriateness and unimaginableness hypothetically characterise their unique endeavour of territorialising space. The purpose might even seem insignificant, expressing their unrestricted development. This indicates what the interiority of children’s territorialisation in a domestic space context actually is. It also implies a new way of seeing territory, since the act of territorialisation has a natural purpose: to claim the space and regard it as one’s own. Aiming to disclose the above territorialisation characteristics, this paper presents a qualitative study comprising a comprehensive analysis as follows: 1) Collecting various territorial traces left by the children’s activities within their respective houses. Further within this stage, the data are categorised based on territorial strategy and tactic. This stage results in an overall map of the children’s territorial interiority, expressing its focuses, range and ways; 2) Examining the interactions that occur between the children and the spatial elements within the house. 
Stressing the affective relations, this stage reveals the immaterial aspect of the children’s territorialisation, thus disclosing the unseen spatial aspect of territorialisation; and 3) Synthesising the previous two stages. Correlating the results of the two stages helps us understand the children’s unpredictable, inappropriate and unimaginable territorial interiority. It also helps us to see how children learn space through the act of territorialisation, its importance, and its position in the conception of interiority. The discussed relation between the children and the houses, covering both their physical and imaginary entities as part of the overall dwelling space, also gives us a better understanding of the specific spatial elements which are significant and undeniably important for children’s spatial learning process. This last finding, in particular, helps determine what kinds of spatial elements need to exist in a house, and thus supports design development. Overall, the study broadens our mindset regarding territory, dwelling, interiority and the overall conception of interior architecture, promising opportunities for further research within the interior architecture field.Keywords: children, interiority, relation, territory
Procedia PDF Downloads 1393967 Blockchain for the Monitoring and Reporting of Carbon Emission Trading: A Case Study on Its Possible Implementation in the Danish Energy Industry
Authors: Nkechi V. Osuji
Abstract:
The use of blockchain to address climate change is increasingly a discourse among countries, industries, and stakeholders. For a long time, the European Union (EU) has been pursuing climate action in industries through sustainability programs. One such program is the EU monitoring, reporting and verification (MRV) framework of the EU ETS. However, the system has some key challenges and areas for improvement, which make it inefficient. The main objective of the research is to examine how blockchain can be used to address the inefficiencies of the EU ETS program for the Danish energy industry, with a focus on its monitoring and reporting framework. Applying empirical data from 13 semi-structured expert interviews, three case studies, and literature reviews, three outcomes are presented in the study. The first concerns the current conditions and challenges of monitoring and reporting CO₂ emission trading. The second considers whether blockchain is the right fit to solve these challenges, and how. The third stage looks at the factors that might affect the implementation of such a system and provides recommendations to mitigate these challenges. The first stage of the findings reveals that the monitoring and reporting of CO₂ emissions is a mandatory legal requirement for all energy operators under the EU ETS program. In reality, however, many energy operators are non-compliant with the program, which creates a gap and causes challenges in the monitoring and reporting of CO₂ emission trading. Other challenges the study found are the lack of transparency, the lack of standardization in CO₂ accounting, and the issue of double-counting in the current system. The second stage of the research was guided by three case studies and requirements engineering (RE) to explore these identified challenges and whether blockchain is the right fit to address them. 
This stage of the research addressed the main research question: how can blockchain be used for monitoring and reporting CO₂ emission trading in the energy industry? Through analysis of the study data, the researcher developed a conceptual private permissioned Hyperledger blockchain and elucidated how it can address the identified challenges. In particular, the smart contract feature of blockchain was highlighted, because of its ability to automate and digitally enforce agreements without a middleman, and its immutability. These characteristics are well suited to solving the identified issues of compliance, transparency, standardization, and double counting. The third stage of the research presents technological constraints and the need for a high level of stakeholder collaboration as major factors that might affect the implementation of the proposed system. The proposed conceptual model requires high-level integration with other technologies, such as the Internet of Things (IoT) and machine learning. The study therefore encourages future research in these areas, as blockchain technology is continually evolving and remains a topic of interest in research and development for addressing climate change. Such a study is a good contribution to creating sustainable practices that address the global climate issue.Keywords: blockchain, carbon emission trading, European Union emission trading system, monitoring and reporting
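The tamper-evidence property that makes blockchain attractive for MRV reporting can be illustrated with a minimal hash-chained ledger. This is a plain-Python sketch, not the Hyperledger Fabric API; the operator names and emission figures are invented:

```python
import hashlib
import json

def _digest(record):
    # Deterministic SHA-256 over the record's canonical JSON form.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class EmissionLedger:
    """Append-only, hash-chained log of CO2 emission reports (illustrative only)."""

    def __init__(self):
        genesis = {"index": 0, "report": None, "prev": "0" * 64}
        genesis["hash"] = _digest({k: genesis[k] for k in ("index", "report", "prev")})
        self.chain = [genesis]

    def append_report(self, operator, tonnes_co2):
        prev = self.chain[-1]
        record = {"index": prev["index"] + 1,
                  "report": {"operator": operator, "tonnes_co2": tonnes_co2},
                  "prev": prev["hash"]}  # link to the previous record's hash
        record["hash"] = _digest({k: record[k] for k in ("index", "report", "prev")})
        self.chain.append(record)

    def verify(self):
        # Recompute every hash; any retroactive edit breaks the chain.
        for prev, cur in zip(self.chain, self.chain[1:]):
            body = {k: cur[k] for k in ("index", "report", "prev")}
            if cur["prev"] != prev["hash"] or cur["hash"] != _digest(body):
                return False
        return True

ledger = EmissionLedger()
ledger.append_report("operator_a", 1250.0)
ledger.append_report("operator_b", 310.5)
print(ledger.verify())                        # True: chain is consistent

ledger.chain[1]["report"]["tonnes_co2"] = 10  # attempt a retroactive edit
print(ledger.verify())                        # False: tampering is detected
```

A permissioned network adds consensus and access control on top of this chaining, and a smart contract would replace `append_report` with validated, automatically enforced submission logic.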
Procedia PDF Downloads 1283966 Non-Population Search Algorithms for Capacitated Material Requirement Planning in Multi-Stage Assembly Flow Shop with Alternative Machines
Authors: Watcharapan Sukkerd, Teeradej Wuttipornpun
Abstract:
This paper presents non-population search algorithms, namely tabu search (TS), simulated annealing (SA) and variable neighborhood search (VNS), to minimize the total cost of the capacitated MRP problem in a multi-stage assembly flow shop with two alternative machines. There are three main steps in each algorithm. Firstly, an initial sequence of orders is constructed by a simple due-date-based dispatching rule. Secondly, the sequence of orders is repeatedly improved to reduce the total cost by applying TS, SA and VNS separately. Finally, the total cost is further reduced by optimizing the start time of each operation using a linear programming (LP) model. Parameters of the algorithms are tuned using real data from automotive companies. The results show that VNS significantly outperforms TS, SA and the existing algorithm.Keywords: capacitated MRP, tabu search, simulated annealing, variable neighborhood search, linear programming, assembly flow shop, application in industry
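The first two steps can be sketched for the tabu search variant. The sketch below replaces the full capacitated MRP cost with simple total tardiness on a single machine and uses invented order data, but it shows the due-date-based (EDD) initial sequence, the pairwise-swap neighborhood, the tabu list and an aspiration criterion:

```python
import random

# Hypothetical orders: (processing_time, due_date); cost = total tardiness.
orders = [(4, 6), (2, 4), (6, 18), (3, 7), (5, 12)]

def total_tardiness(seq):
    t, cost = 0, 0
    for i in seq:
        p, d = orders[i]
        t += p
        cost += max(0, t - d)
    return cost

def edd_sequence():
    # Step 1: initial sequence from a due-date-based dispatching rule (EDD).
    return sorted(range(len(orders)), key=lambda i: orders[i][1])

def tabu_search(iters=50, tenure=5, seed=1):
    random.seed(seed)
    current = edd_sequence()
    best, best_cost = current[:], total_tardiness(current)
    tabu = []
    for _ in range(iters):
        # Step 2: evaluate all pairwise swaps of the current sequence.
        candidates = []
        for i in range(len(current)):
            for j in range(i + 1, len(current)):
                nb = current[:]
                nb[i], nb[j] = nb[j], nb[i]
                c = total_tardiness(nb)
                # Aspiration: a tabu move is allowed if it beats the best so far.
                if (i, j) not in tabu or c < best_cost:
                    candidates.append((c, (i, j), nb))
        c, move, nb = min(candidates)   # best admissible neighbor
        current = nb
        tabu.append(move)
        if len(tabu) > tenure:
            tabu.pop(0)                 # forget the oldest tabu move
        if c < best_cost:
            best, best_cost = nb[:], c
    return best, best_cost

seq, cost = tabu_search()
print(seq, cost)
```

The paper's third step, re-optimizing operation start times with an LP model, would take `seq` as fixed input; it is omitted here since it needs an LP solver.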
Procedia PDF Downloads 2333965 Arsenic Removal by Membrane Technology, Adsorption and Ion Exchange: An Environmental Lifecycle Assessment
Authors: Karan R. Chavan, Paula Saavalainen, Kumudini V. Marathe, Riitta L. Keiski, Ganapati D. Yadav
Abstract:
Co-contamination of groundwater by arsenic in different forms is often observed around the globe. Arsenic is introduced into waters by several mechanisms, and different technologies are proposed and practiced for its effective removal. The assessment of three prominent technologies, namely adsorption, ion exchange and nanofiltration, was carried out in this study based on the lifecycle methodology. The life of the technologies was divided into two stages, cradle to gate (C-G) and gate to gate (G-G), in order to quantify the impacts in the categories of environmental burdens, human health and resource consumption. The lifecycle inventory was estimated using models and design equations for the different technologies. Regeneration was considered for each technology over the course of its full lifetime. The impact values of the adsorption technology for the C-G stage are greater by a factor of a thousand (10³) and a million (10⁶) compared to the ion exchange and nanofiltration technologies, respectively. The G-G stage of the lifecycle is the major contributor to the impact for all three technologies, due to electricity consumption during operation. Overall, the ion exchange technology fares best in this study, which addresses the removal of As(V) only.Keywords: arsenic, nanofiltration, lifecycle assessment, membrane technology
Procedia PDF Downloads 2453964 An Ergonomic Evaluation of Three Load Carriage Systems for Reducing Muscle Activity of Trunk and Lower Extremities during Giant Puppet Performing Tasks
Authors: Cathy SW. Chow, Kristina Shin, Faming Wang, B. C. L. So
Abstract:
During some dynamic giant puppet performances, an ergonomically designed load carriage system is necessary for the puppeteers to carry a giant puppet body’s heavy load with minimum muscle stress. A load carrier (i.e. the prototype) was designed with two small wheels on the feet and a hybrid spring device on the knees, in order to assist the sliding and knee-bending movements respectively. The purpose of this study was therefore to evaluate the effect of three load carriers: two commercially available load mounting systems, Tepex and SuitX, and the prototype. Ten male participants were recruited for the experiment. Surface electromyography (sEMG) was used to record the participants’ muscle activities during forward moving and bouncing, with and without a load of 11.1 kg positioned 60 cm above the shoulder. Five bilateral muscles, namely the lumbar erector spinae (LES), rectus femoris (RF), biceps femoris (BF), tibialis anterior (TA), and gastrocnemius (GM), were selected for data collection. During the forward moving task, the sEMG data showed the smallest muscle activities with the Tepex harness, which was consistently the lowest; the prototype and SuitX were significantly higher on the left LES by 68.99% and 64.99%, right LES by 26.57% and 82.45%; left RF by 87.71% and 47.61%, right RF by 143.57% and 24.28%; left BF by 80.21% and 22.23%, right BF by 96.02% and 21.83%; right TA by 6.32% and 4.47%; and left GM by 5.89% and 12.35% respectively. These results reflect that mobility was highly restricted by the tested exoskeleton devices. 
On the other hand, the sEMG data from the bouncing task showed the smallest muscle activities with the prototype, which was consistently the lowest; the Tepex harness and SuitX were significantly higher on the left LES by 6.65% and 104.93%, right LES by 23.56% and 92.19%; left BF by 33.21% and 93.26%, right BF by 24.70% and 81.16%; left TA by 46.51% and 191.02%, right TA by 12.75% and 125.76%; left GM by 31.54% and 68.36%; and right GM by 95.95% and 96.43% respectively.Keywords: exoskeleton, giant puppet performers, load carriage system, surface electromyography
Procedia PDF Downloads 1073963 Three-Stage Mining Metals Supply Chain Coordination and Product Quality Improvement with Revenue Sharing Contract
Authors: Hamed Homaei, Iraj Mahdavi, Ali Tajdin
Abstract:
One of the main concerns of miners is to increase the quality level of their products, because the price of mining metals depends on quality; however, increasing the quality level has different costs at different levels of the supply chain, and these costs usually increase after the extractor level. This paper studies the coordination of a decentralized three-level supply chain with one supplier (extractor), one mineral processor and one manufacturer, in which the cost of increasing the product quality level is higher at the processor level than at the supplier level, and higher at the manufacturer level than at the processor level. We identify the optimal product quality level for each supply chain member by designing a revenue sharing contract. Finally, numerical examples show that the designed contract not only increases the final product quality level but also provides a win-win condition for all supply chain members and increases the whole supply chain profit.Keywords: three-stage supply chain, product quality improvement, channel coordination, revenue sharing
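A toy numerical instance may clarify the contract mechanics. The linear revenue and quadratic cost functions below are illustrative assumptions, not the paper's model; they only encode the stated cost ordering (extractor < processor < manufacturer) and the idea that each member receives a fixed share of the final revenue:

```python
# Invented quality-improvement cost coefficients, growing downstream.
c = {"extractor": 1.0, "processor": 2.0, "manufacturer": 4.0}
# Invented market assumption: metal price rises linearly with quality q.
base_revenue, price_per_quality = 50.0, 30.0

def chain_profit(q):
    revenue = base_revenue + price_per_quality * q
    cost = sum(ci * q**2 for ci in c.values())   # total quality cost in the chain
    return revenue - cost

# Grid search for the quality level maximizing whole-chain profit.
grid = [i / 100 for i in range(0, 301)]
q_star = max(grid, key=chain_profit)

# Revenue sharing contract: each member gets a fraction of the final revenue
# and bears its own quality-improvement cost.
shares = {"extractor": 0.2, "processor": 0.3, "manufacturer": 0.5}
revenue = base_revenue + price_per_quality * q_star
profits = {m: shares[m] * revenue - c[m] * q_star**2 for m in shares}

print(q_star, profits)   # every member's profit is positive: a win-win split
```

With these numbers the chain-optimal quality solves 30 - 2(1+2+4)q = 0, i.e. q* ≈ 2.14, and any share vector leaving each member a positive profit at q* sustains the coordinated outcome.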
Procedia PDF Downloads 1833962 Transition Dynamic Analysis of the Urban Disparity in Iran “Case Study: Iran Provinces Center”
Authors: Marzieh Ahmadi, Ruhullah Alikhan Gorgani
Abstract:
The usual methods of measuring regional inequalities cannot reflect internal changes within the country in terms of the displacement of regions among different development groups, and inequality indicators are not effective in demonstrating the dynamics of the distribution of inequality. For this purpose, this paper examines the dynamics of urban disparity transition in the country during the period 2006-2016 using the CIRD multidimensional index and the stochastic kernel density method. First, 25 indicators are selected in five dimensions, including macroeconomic conditions, science and innovation, environmental sustainability, human capital and public facilities, and a two-stage Principal Component Analysis methodology is developed to create a composite index of inequality. Then, in the second stage, using a nonparametric approach to internal distribution dynamics and the stochastic kernel density method, the convergence hypothesis for the CIRD index of the Iranian province centers is tested and, based on the ergodic density, the long-run equilibrium is shown. Also at this stage, for the purpose of adopting accurate regional policies, the distribution dynamics and the process of convergence or divergence of the Iranian provinces are examined for each of the five dimensions. According to the results of the first stage, in 2006 and 2016 the highest level of development is related to Tehran, and Zahedan is at the lowest level of development. The results show that the central cities of the country are at the highest level of development, due to the effects of Tehran's knowledge spillover, while the country's outlying cities are at the lowest level of development. The main reason for this may be the lack of access to markets in the border provinces. 
Based on the results of the second stage, which examines the dynamics of regional inequality transition in the country during 2006-2016, the distribution in the first year (2006) is not multimodal: according to the kernel density graph, the CIRD index of about 70% of the cities takes a value between -1.1 and -0.1, and the rest of the distribution lies to the right, at a level higher than -0.1. In the kernel distribution a convergence process is observed and the graph tends towards a single peak; there is a small peak at about 3, but the main peak is at about -0.6. In the final year (2016) the multimodal pattern emerges: there is no mobility in the lower-level groups, but at the higher level the CIRD index of about 45% of the provinces lies at about -0.4. This year clearly exhibits a twin-peak density pattern, which indicates that cities tend to cluster into groups with similar levels of development, with low-development cities grouped together. Also, according to the distribution dynamics results, the provinces of Iran follow a single-peak density pattern in 2006 and a twin-peak density pattern in 2016 at low and moderate inequality index levels, and in terms of the development index the country diverges during the years 2006 to 2016.Keywords: urban disparity, CIRD index, convergence, distribution dynamics, stochastic kernel density
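The kernel density step can be reproduced on synthetic data. The sketch below estimates a Gaussian kernel density for an invented sample of development-index values, with a large lagging cluster and a smaller advanced cluster, and counts the modes that distinguish a single-peak from a twin-peak pattern (the cluster locations and bandwidth are assumptions, not the study's data):

```python
import math
import random

def gaussian_kde(sample, h):
    """Gaussian kernel density estimator with bandwidth h."""
    n = len(sample)
    norm = n * h * math.sqrt(2 * math.pi)
    def f(x):
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in sample) / norm
    return f

def count_peaks(f, grid):
    # A peak is a grid point strictly higher than both neighbours.
    vals = [f(x) for x in grid]
    return sum(1 for i in range(1, len(vals) - 1)
               if vals[i] > vals[i - 1] and vals[i] > vals[i + 1])

random.seed(2)
# Synthetic index values: 80 lagging provinces around -0.6, 20 around 0.4.
sample = ([random.gauss(-0.6, 0.12) for _ in range(80)] +
          [random.gauss(0.4, 0.12) for _ in range(20)])

grid = [-1.5 + i * 0.01 for i in range(301)]   # covers [-1.5, 1.5]
f = gaussian_kde(sample, h=0.1)
peaks = count_peaks(f, grid)
print(peaks)   # two well-separated clusters give a twin-peak density
```

The full distribution-dynamics analysis would estimate a conditional (stochastic) kernel between the 2006 and 2016 index values and iterate it to the ergodic density; the unconditional estimator here is only the first ingredient of that machinery.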
Procedia PDF Downloads 1243961 Redesigning the Plant Distribution of an Industrial Laundry in Arequipa
Authors: Ana Belon Hercilla
Abstract:
The study is developed in the “Reactivos Jeans” company, in the city of Arequipa, whose main business is the laundering of garments at an industrial level. In 2012 the company initiated actions to provide a dry cleaning service for alpaca fiber garments, recognizing that this item is in a growth phase in Peru. Additionally, the company took the initiative to use a new eco-friendly (“green”) washing technology which has not yet been developed in the country. To accomplish this, a redesign of both the process and the plant layout was required. For redesigning the plant, the Systematic Layout Planning methodology was used, dividing this study into four stages. The first stage is the gathering of information and evaluation of the initial situation of the company, for which a description was made of the areas, facilities and initial equipment, the distribution of the plant, the production process and the flows of the major operations. The second stage is the application of engineering techniques that allow the recording and analysis of procedures, such as the flow diagram, route diagram, DOP (process flowchart) and DAP (analysis diagram). Then the planning of the general distribution is carried out. At this stage, proximity factors of the areas are established, and the activity relationship chart (TRA) and the activity relationship diagram (DRA) are developed. In order to obtain the general grouping diagram (DGC), this information is complemented by a time study, and the Guerchet method is used to calculate the space requirements for each area. Finally, the redesigned plant layout is presented and the improvement is implemented, making it possible to obtain a model much more efficient than the initial design. 
The results indicate that the implementation of the new machinery, the adaptation of the plant facilities and the relocation of equipment resulted in a reduction of the production cycle time by 75.67%; routes were reduced by 68.88%, the number of activities in the process was reduced by 40%, and waits and storage were eliminated entirely (100%).Keywords: redesign, time optimization, industrial laundry, greenwashing
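The Guerchet space calculation mentioned above can be sketched as follows. It sums, per machine type, the static surface (footprint), the gravitation surface (operator/material sides) and the evolution surface (circulation allowance); the machine dimensions and the evolution coefficient k below are invented for illustration:

```python
# Hypothetical laundry equipment: (length m, width m, access sides N, count).
machines = {
    "washer": (2.0, 1.5, 2, 3),
    "dryer":  (1.8, 1.2, 1, 4),
    "press":  (1.5, 1.0, 1, 2),
}
# Evolution coefficient k: in Guerchet's method, the ratio of the average
# height of mobile elements to that of static elements (assumed here).
k = 0.7

def guerchet_area(length, width, sides, count, k):
    ss = length * width   # static surface of one machine
    sg = ss * sides       # gravitation surface: one Ss per usable side
    se = (ss + sg) * k    # evolution surface: circulation allowance
    return (ss + sg + se) * count

total = sum(guerchet_area(*spec, k) for spec in machines.values())
print(round(total, 2))    # total floor area required, in square meters
```

Summing these per-area requirements is what feeds the space side of the general grouping diagram (DGC) in the Systematic Layout Planning procedure.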
Procedia PDF Downloads 3943960 Development and Validation of Work Movement Task Analysis: Part 1
Authors: Mohd Zubairy Bin Shamsudin
Abstract:
Work-related musculoskeletal disorders (WMSDs) are one of the occupational health problems encountered by workers all over the world. In Malaysia, there is an increasing trend over the years, particularly in the manufacturing sector. Current methods to observe workplace WMSDs are self-report questionnaires, observation and direct measurement. The observational method is most frequently used by researchers and practitioners because it is simple, quick and versatile when applied at the worksite. However, some limitations have been identified, e.g. some approaches do not cover a wide spectrum of biomechanical activity and are not sufficiently sensitive to assess the actual risks. This paper elucidates the development of the Work Movement Task Analysis (WMTA), an observational tool for industrial practitioners, especially untrained personnel, to assess WMSD risk factors and provide a basis for suitable interventions. The first stage of the development protocol involved literature reviews, a practitioner survey, tool validation and reliability testing. A total of six themes/comments were received in the face validity stage. The new revision of the WMTA consists of four sections: posture (neck, back, shoulder, arms and legs) and associated risk factors; movement; load and coupling; and basic environmental factors (lighting, noise, odor, heat and slippery floors). The inter-rater reliability study shows substantial agreement among raters, with K = 0.70. Meanwhile, the WMTA validation shows a significant association between WMTA scores and self-reported pain or discomfort for the back, shoulder & arms, and knee & legs, with p < 0.05. This tool is expected to provide a new workplace ergonomic observational tool to assess WMSDs in the next stage of the case study.Keywords: assessment, biomechanics, musculoskeletal disorders, observational tools
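The reported K = 0.70 is Cohen's kappa, the standard chance-corrected agreement statistic for two raters. The calculation can be sketched on hypothetical ratings (the data below are invented, so the resulting kappa differs from the study's value):

```python
# Hypothetical ratings by two observers on a 1-4 ordinal risk scale.
rater_a = [1, 2, 2, 3, 4, 1, 2, 3, 3, 4, 2, 1]
rater_b = [1, 2, 3, 3, 4, 1, 2, 2, 3, 4, 2, 2]

def cohens_kappa(a, b):
    n = len(a)
    categories = sorted(set(a) | set(b))
    # Observed agreement: fraction of items rated identically.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

kappa = cohens_kappa(rater_a, rater_b)
print(round(kappa, 2))   # here 9/12 observed agreement yields kappa = 23/35
```

Values between 0.61 and 0.80 are conventionally interpreted as "substantial" agreement, which is the label the study applies to its K = 0.70.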
Procedia PDF Downloads 4693959 Investigation of the Material Behaviour of Polymeric Interlayers in Broken Laminated Glass
Authors: Martin Botz, Michael Kraus, Geralt Siebert
Abstract:
The use of laminated glass is gaining importance in structural engineering. For safety reasons, at least two glass panes are laminated together with a polymeric interlayer. In case of breakage of one or all glass panes, the glass fragments remain connected to the interlayer due to adhesion forces, and a certain residual load-bearing capacity is left in the system. Polymer interlayers used in laminated glass show viscoelastic material behavior, i.e. stresses and strains in the interlayer depend on load duration and temperature. In the intact stage, only small strains appear in the interlayer, so the material can be described linearly. In the broken stage, large strains can appear, and a non-linear viscoelastic material theory is necessary. Relaxation tests on two different types of polymeric interlayers are performed at different temperatures and strain amplitudes to determine the boundary of the non-linear material regime. Based on the small-scale specimen results, further tests on broken laminated glass panes are conducted. So-called ‘through-crack-bending’ (TCB) tests are performed, in which the laminated glass has a defined crack pattern. The test set-up is realized such that one glass layer is still able to transfer compressive stresses, while tensile stresses must be transferred solely by the interlayer. The TCB tests are also conducted at different temperatures but under constant force (creep tests). The aim of these experiments is to evaluate whether the results of small-scale tests on the interlayer are transferable to a laminated glass system in the broken stage. In this study, limits of the applicability of linear viscoelasticity are established for two commercially available polymer interlayers. Furthermore, it is shown that the results of the small-scale tests agree to a certain degree with the results of the TCB large-scale experiments.
In a future step, the results can be used to develop material models for the post-breakage performance of laminated glass.
Keywords: glass breakage, laminated glass, relaxation test, viscoelasticity
Procedia PDF Downloads 121
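Relaxation-test data of the kind described in the abstract above are commonly fitted with a generalized Maxwell model, in which the time-dependent relaxation modulus is a Prony series. A minimal sketch with hypothetical parameters (not fitted values from the study):

```python
# Illustrative sketch only: a generalized Maxwell (Prony series) relaxation
# modulus, E(t) = E_inf + sum_i E_i * exp(-t / tau_i), of the type commonly
# fitted to relaxation tests on polymer interlayers. All parameter values
# below are hypothetical, not data from the study.
import math

def relaxation_modulus(t, e_inf, prony_terms):
    """Relaxation modulus in MPa at time t (seconds).

    e_inf       -- long-term (equilibrium) modulus E_inf, MPa
    prony_terms -- list of (E_i, tau_i) pairs: modulus MPa, relaxation time s
    """
    return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in prony_terms)

# Hypothetical Prony terms spanning several decades of relaxation time:
terms = [(5.0, 0.1), (2.0, 10.0), (1.0, 1000.0)]
e_inf = 0.5  # equilibrium modulus, MPa

# The modulus decays from the instantaneous value E_inf + sum(E_i) toward
# E_inf, which is why load duration and temperature matter for the interlayer.
for t in (0.01, 1.0, 100.0, 10000.0):
    print(f"E({t:g} s) = {relaxation_modulus(t, e_inf, terms):.3f} MPa")
```

Temperature dependence would additionally enter through time-temperature superposition (a shift factor applied to t), which such models use to cover the test temperatures mentioned above.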