Search results for: productivity measurement
666 Monitoring of Water Quality Using Wireless Sensor Network: Case Study of Benue State of Nigeria
Authors: Desmond Okorie, Emmanuel Prince
Abstract:
Availability of potable water has been a global challenge, especially in developing regions such as Africa and, in particular, Nigeria. The World Health Organization (WHO) has produced the Guidelines for Drinking-water Quality (GDWQ), which aim to ensure water safety from source to consumer. Potable water parameter tests include physical (colour, odour, temperature, turbidity), chemical (pH, dissolved solids), and biological (algae, phytoplankton) measurements. This paper discusses the use of wireless sensor networks to monitor water quality using efficient and effective sensors that can sense, process, and transmit the sensed data. Integrating a wireless sensor network with a portable sensing device offers distributed sensing capability, on-site data measurement, and remote sensing abilities. The current water quality tests performed by government water quality institutions in Benue State, Nigeria, are carried out in problematic locations that require taking manual water samples to the institution's laboratory for examination. To automate the entire process, a system based on a wireless sensor network was designed. The system consists of a sensor node containing one pH sensor, one temperature sensor, a microcontroller, and a ZigBee radio, together with a base station composed of a ZigBee radio and a PC. Due to advances in wireless sensor network technology, unexpected contamination events in water environments can now be observed continuously. Local area networks (LAN), wireless local area networks (WLAN), and web-based internet connections are also commonly used as gateway units for data communication to a local base computer via the standard global system for mobile communication (GSM). The improvements made in this work demonstrate a working water quality monitoring system and the prospect of a more robust and reliable system in the future.
Keywords: local area network, pH measurement, wireless sensor network, ZigBee
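A base station receiving node readings would typically screen each sample against guideline limits before raising an alert. A minimal Python sketch of that step follows; the limits used (pH 6.5-8.5, turbidity at most 5 NTU) are illustrative placeholders, not values quoted from the WHO GDWQ:

```python
def check_sample(ph, turbidity_ntu):
    """Screen one node reading against guideline-style limits.

    The limits here (pH 6.5-8.5, turbidity <= 5 NTU) are illustrative
    placeholders, not values quoted from the WHO GDWQ.
    """
    issues = []
    if not 6.5 <= ph <= 8.5:
        issues.append("pH out of range")
    if turbidity_ntu > 5.0:
        issues.append("turbidity above limit")
    return issues

# A clean sample passes; a contaminated one is flagged for the operator.
```

In a deployment, this check would run on the base-station PC as packets arrive over the ZigBee link, with flagged samples forwarded via GSM.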
Procedia PDF Downloads 173
665 Advances in Genome Editing and Future Prospects for Sorghum Improvement: A Review
Authors: Micheale Yifter Weldemichael, Hailay Mehari Gebremedhn, Teklehaimanot Hailesslasie Teklu
Abstract:
Recent developments in targeted genome editing have accelerated genetic research and opened new potential to improve crops for better yield and quality. Given the significance of cereal crops as a primary source of food for the global population, the utilization of contemporary genome editing techniques such as CRISPR/Cas9 is timely and crucial. CRISPR/Cas technology has enabled targeted genomic modifications, revolutionizing genetic research and exploration. The application of gene editing through CRISPR/Cas9 to enhancing sorghum is particularly vital given the current ecological, environmental, and agricultural challenges exacerbated by climate change. As sorghum is one of the main staple foods of our region and is known to be a resilient crop with a high potential to overcome the above challenges, the application of genome editing technology will enhance the investigation of gene functionality. CRISPR/Cas9 enables the improvement of desirable sorghum traits, including nutritional value, yield, resistance to pests and diseases, and tolerance to various abiotic stresses. Furthermore, CRISPR/Cas9 has the potential to perform intricate editing, reshape existing elite sorghum varieties, and introduce new genetic variations. Current research primarily focuses on improving the efficacy of the CRISPR/Cas9 system in editing endogenous sorghum genes, making it a feasible and successful undertaking in sorghum improvement. Recent advancements in CRISPR/Cas9 techniques have further empowered researchers to modify additional genes in sorghum with greater efficiency. Successful application and advancement of CRISPR techniques in sorghum will aid not only gene discovery and the creation of novel traits that regulate gene expression and functional genomics but also the facilitation of site-specific integration events.
The purpose of this review is, therefore, to elucidate the current advances in sorghum genome editing and highlight its potential in addressing food security issues. It also assesses the efficiency of CRISPR-mediated improvement and its long-term effects on crop improvement and host resistance against parasites, including tissue-specific activity and the ability to induce resistance. This review ends by emphasizing the challenges and opportunities of CRISPR technology in combating parasitic plants and proposing directions for future research to safeguard global agricultural productivity.
Keywords: CRISPR/Cas9, genome editing, quality, sorghum, stress, yield
Procedia PDF Downloads 40
664 An Experimental Investigation of Rehabilitation and Strengthening of Reinforced Concrete T-Beams Under Static Monotonic Increasing Loading
Authors: Salem Alsanusi, Abdulla Alakad
Abstract:
This paper presents an experimental investigation of the flexural behaviour of reinforced concrete T-beams. The beams were loaded to pre-designated stress levels, expressed as percentages of the calculated collapse loads, and then repaired with either a reinforced concrete jacket or externally bolted steel plates. Twelve full-scale beams were tested in this experimental programme. Eight of the twelve beams were loaded at different levels and tested before and after repair with a reinforced concrete jacket (RCJ); the applied load levels were 60%, 77%, and 100% of the calculated collapse loads. The remaining four beams were tested before and after repair with bolted steel plates (BSP); of these four, two were loaded to 100% of the calculated failure load and the other two were not subjected to any load. The eight beams assigned to the RCJ test were repaired using a reinforced concrete jacket, and the four beams assigned to the BSP test were all repaired using a steel plate at the bottom. All strengthened beams were then loaded gradually until failure occurred. In each loading case, the behaviour of the beams before and after strengthening was studied through close inspection of crack propagation and extensive measurement of deformation and strength. The stress-strain curve of the reinforcing steel and the failure strains measured in the tests were used to calculate the failure loads of the beams before and after strengthening. The calculated failure loads were close to the actual test values: 85% to 90% for beams before repair, 70% to 85% for beams repaired with the reinforced concrete jacket, and 50% to 85% for beams repaired with bolted steel plates.
It was observed that both the jacketing and the bolted steel plate methods could effectively restore the full flexural capacity of the damaged beams. The reinforced concrete jacket increased the failure load by about 67%, whereas the bolted steel plates recovered the original failure load.
Keywords: rehabilitation, strengthening, reinforced concrete, beam deflection, bending stresses
Procedia PDF Downloads 306
663 Performance of Reinforced Concrete Beams under Different Fire Durations
Authors: Arifuzzaman Nayeem, Tafannum Torsha, Tanvir Manzur, Shaurav Alam
Abstract:
Performance evaluation of reinforced concrete (RC) beams subjected to accidental fire is significant for post-fire capacity measurement. The mechanical properties of any RC beam degrade on heating, since the strength and modulus of both concrete and reinforcement suffer considerable reduction at elevated temperatures. Moreover, fire-induced thermal dilation and shrinkage cause internal stresses within the concrete and eventually result in cracking, spalling, and loss of stiffness, which ultimately leads to lower service life. However, conducting a comprehensive full-scale experimental investigation of RC beams exposed to fire is difficult and cost-intensive, and finite element (FE) based numerical study can provide an economical alternative for evaluating the post-fire capacity of RC beams. In this study, an attempt has been made to study the fire behavior of RC beams under different fire durations using the FE software package ABAQUS. The concrete damaged plasticity model in ABAQUS was used to simulate the behavior of the RC beams, and the effect of temperature on the strength and modulus of concrete and steel was modeled following the relevant Eurocodes. Initially, the FE models were validated against several experimental results from available scholarly articles. The response of the developed FE models matched the experimental outcomes quite well for beams not exposed to heat. The FE analysis of beams subjected to fire showed some deviation from the experimental results, particularly in stiffness degradation; however, the ultimate strength and deflection of the FE models were similar to the experimental values. The developed FE models thus exhibited good potential to predict the fire behavior of RC beams. Once validated, the FE models were used to analyze several RC beams of different strengths (ranging between 20 MPa and 50 MPa) exposed to the standard fire curve (ASTM E119) for different durations.
The post-fire performance of the RC beams was investigated in terms of load-deflection behavior, flexural strength, and deflection characteristics.
Keywords: fire durations, flexural strength, post-fire capacity, reinforced concrete beam, standard fire
Procedia PDF Downloads 142
662 Use of Radiation Chemistry Techniques, Instrumental Neutron Activation Analysis (INAA) and Atomic Absorption Spectroscopy (AAS), for the Elemental Analysis of Medicinal Plants from India Used in the Treatment of Heart Diseases
Authors: B. M. Pardeshi
Abstract:
Introduction: Minerals and trace elements are chemical elements required by our bodies for the numerous biological and physiological processes necessary for the maintenance of health. Medicinal plants are highly beneficial for maintaining good health and preventing disease and are known as potential sources of minerals and vitamins. An estimated 30 to 40% of today's conventional drugs are derived from plants, and the medicinal and curative properties of various plants are exploited in herbal supplements, botanicals, and nutraceuticals. Aim: The authors explored the mineral element content of selected herbs because mineral elements may play a significant role in the development and treatment of gastrointestinal diseases, and a close connection between the presence or absence of mineral elements and inflammatory mediators has been noted. Methods: The present study deals with the elemental analysis of medicinal plants by instrumental neutron activation analysis (INAA) and atomic absorption spectroscopy (AAS). Medicinal herbs prescribed for skin diseases were purchased from markets and analyzed by INAA using a 252Cf spontaneous fission neutron source (flux ~10^9 n s^-1), the induced activities being counted by gamma-ray spectrometry, and by AAS (Perkin Elmer 3100 model) available at the Department of Chemistry, University of Pune, India, for the measurement of major, minor, and trace elements. Results: Fifteen elements, viz. Al, K, Cl, Na, and Mn by INAA and Cu, Co, Pb, Ni, Cr, Ca, Fe, Zn, Hg, and Cd by AAS, were determined in the different medicinal plants from India. A critical examination of the data shows that Ca, K, Cl, Al, and Fe are present at major levels in most of the samples, while Na, Mn, Cu, Co, Pb, Ni, Cr, Zn, Hg, and Cd are present at minor or trace levels.
Conclusion: The beneficial therapeutic effects of the studied herbs may be related to their mineral element content. The elemental concentrations in the different medicinal plants are discussed.
Keywords: instrumental neutron activation analysis, atomic absorption spectroscopy, medicinal plants, trace elemental analysis, mineral contents
Procedia PDF Downloads 331
661 Sustainable Development and Modern Challenges of Higher Educational Institutions in the Regions of Georgia
Authors: Natia Tsiklashvili, Tamari Poladashvili
Abstract:
Education is one of the fundamental factors of economic prosperity in all respects. It is impossible to speak of a country's sustainable economic development without substantial investment in human capital and in higher educational institutions. Education improves the population's standard of living and expands opportunities to receive further benefits, which is equally important for the individual and for society as a whole. Initiatives such as entrepreneurship and technological development grow among educated people, and at the same time, the distribution of income between population groups improves. This paper discusses the scientific literature on sustainable development through higher educational institutions. Scholars of economic theory emphasize several major aspects of the role of higher education in economic growth: a) alongside education, human capital gradually increases, which raises the competitiveness of the labor force in both the national and the international labor market (neoclassical growth theory); b) a high level of education can increase the efficiency of the economy, since investment in human capital, innovation, and knowledge are significant contributors to economic growth; this view focuses on the positive externalities and spillover effects of a knowledge-based economy that lead to economic development (endogenous growth theory); c) education can facilitate the diffusion and transfer of knowledge, thereby supporting macroeconomic sustainability and the microeconomic conditions of individuals. In discussing the economic importance of education, we consider education the personal development of a human being that advances general skills, supports the acquisition of a profession, and improves living conditions. Scholars agree that human capital comprises not only money but liquid assets, stocks, and competitive knowledge.
The last of these is the main lever for increasing human competitiveness and productivity. To address local issues, the present article studied ten educational institutions across Georgia, including state and private HEIs. Qualitative research was conducted by analyzing in-depth interviews with representatives of each institution; the respondents were rectors, vice-rectors, or heads of the quality assurance service at the institution. The results show that there are a number of challenges that institutions face in maintaining sustainable development and acting as strong links between education and the labor market. Mostly these are connected with bureaucracy, the insufficient finances institutions receive, and local challenges that differ across the regions.
Keywords: higher education, higher educational institutions, sustainable development, regions, Georgia
Procedia PDF Downloads 85
660 Returning to Work: A Qualitative Exploratory Study of Head and Neck Cancer Survivor Disability and Experience
Authors: Abi Miller, Eleanor Wilson, Claire Diver
Abstract:
Background: UK Head and Neck Cancer incidence and prevalence are rising, related to better treatment outcomes and changing demographics, and more people of working age now survive Head and Neck Cancer. For individuals, work provides income, purpose, and social connection; for society, work increases economic productivity and reduces welfare spending. In the UK, a cancer diagnosis is classed as a disability, and more disabled people leave the workplace than non-disabled people. Limited evidence exists on return-to-work after Head and Neck Cancer, with no UK qualitative studies. Head and Neck Cancer survivors appear to return to work less than other cancer survivors. This study aimed to explore the effects of Head and Neck Cancer disability on survivors' return-to-work experience. Methodology: This was an exploratory qualitative study using a critical realist approach to carry out one-off semi-structured interviews with Head and Neck Cancer survivors who had returned to work. Interviews were informed by an interview guide and carried out remotely by Microsoft Teams or telephone. Interviews were transcribed verbatim, pseudonyms were allocated, and transcripts were anonymized. Data were interpreted using reflexive thematic analysis. Findings: Thirteen Head and Neck Cancer survivors aged between 41 and 63 years participated in interviews. Three major themes were derived from the data: changed identity and meaning of work after Head and Neck Cancer, challenging and supportive work experiences, and the impact of healthcare professionals on return-to-work. Participants described visible changes in physical appearance, speech and eating challenges, mental health difficulties, and psycho-social shifts following Head and Neck Cancer. These factors affected workplace re-integration, the ability to carry out work duties, and work relationships.
Most participants reported challenging work experiences, including stigmatizing workplace interactions and poor communication from managers or colleagues, which further affected their confidence and mental health. Many experienced job change or loss, related both to Head and Neck Cancer and to living through a pandemic. A minority experienced supportive strategies such as a phased return, which aided workplace re-integration. All participants bar one wanted conversations with healthcare professionals about return-to-work but perceived these conversations as absent. Conclusion: All participants found returning to work after Head and Neck Cancer a challenging experience. This appears to be shaped by participants' physical, psychological, and functional disability following Head and Neck Cancer, their work interactions, and their work context.
Keywords: disability, experience, head and neck cancer, qualitative, return-to-work
Procedia PDF Downloads 118
659 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare
Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl
Abstract:
Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. 
Further, the BC chart's performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
Keywords: average run length (ARL), Bernoulli CUSUM (BC) chart, beta-binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval
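The BBPP chart's limits follow directly from the beta-binomial posterior predictive distribution. The Python sketch below is an illustration, not the authors' implementation: it computes the beta-binomial pmf via log-gamma functions and derives central-interval control limits for the event count in the next n cases, given a Beta(a, b) posterior for the CI rate.

```python
import math

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_pmf(k, n, a, b):
    """P(K = k) when K ~ Beta-Binomial(n, a, b), i.e. the posterior
    predictive count of events among the next n cases."""
    log_p = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
             + log_beta(k + a, n - k + b) - log_beta(a, b))
    return math.exp(log_p)

def central_limits(n, a, b, coverage=0.99):
    """Central-interval control limits: cut roughly (1 - coverage)/2
    probability from each tail of the BBPP distribution."""
    tail = (1.0 - coverage) / 2.0
    probs = [beta_binomial_pmf(k, n, a, b) for k in range(n + 1)]
    acc, lcl = 0.0, 0
    for k in range(n + 1):          # smallest k whose CDF exceeds the tail
        if acc + probs[k] > tail:
            lcl = k
            break
        acc += probs[k]
    acc, ucl = 0.0, n
    for k in range(n, -1, -1):      # largest k whose upper tail exceeds it
        if acc + probs[k] > tail:
            ucl = k
            break
        acc += probs[k]
    return lcl, ucl
```

A monitored count falling outside (lcl, ucl) would signal an out-of-control CI rate; the HPD variant the paper favours instead keeps the highest-density set of k values rather than equal-tail cuts.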
Procedia PDF Downloads 202
658 Experimental Verification of Similarity Criteria for Sound Absorption of Perforated Panels
Authors: Aleksandra Majchrzak, Katarzyna Baruch, Monika Sobolewska, Bartlomiej Chojnacki, Adam Pilch
Abstract:
Scaled modeling is very common in areas of science such as aerodynamics or fluid mechanics, since defining characteristic numbers enables researchers to determine the relations between objects under test and their models. In acoustics, scaled modeling is aimed mainly at the investigation of room acoustics, sound insulation, and sound absorption phenomena. Despite this range of application, no method has been developed that enables acoustical perforated panels to be scaled freely while maintaining their sound absorption coefficient in a desired frequency range. Theoretical and numerical analyses conducted previously have shown that it is not physically possible to obtain a given sound absorption coefficient in a desired frequency range by directly scaling all of the physical dimensions of a perforated panel according to a defined characteristic number. This paper is a continuation of that research and presents a practical evaluation of the theoretical and numerical analyses. Measurements of the sound absorption coefficient of perforated panels were performed to verify the previous analyses and, as a result, to find the relations between full-scale perforated panels and their models that will enable them to be scaled properly. The measurements were conducted in a one-to-eight scale model of the reverberation chamber of the Technical Acoustics Laboratory, AGH. The results obtained verify the hypotheses proposed after the theoretical and numerical analyses. Finding the relations between full-scale and modeled perforated panels will allow measurement samples equivalent to the original ones to be produced. As a consequence, it will make the process of designing acoustical perforated panels easier and will also lower the cost of prototype production.
With this knowledge, it will be possible to emulate more precisely, in a constructed model, panels used, or to be used, in a full-scale room, and as a result to imitate or predict the acoustics of the modeled space more accurately.
Keywords: characteristic numbers, dimensional analysis, model study, scaled modeling, sound absorption coefficient
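A simple resonance estimate illustrates why directly scaling every physical dimension of a perforated panel shifts its absorption band rather than preserving it. The sketch below uses the textbook Helmholtz-resonator approximation for a perforated panel backed by an air cavity; the 1.7*r end correction and all dimensions are illustrative assumptions, not values from the study. Since the perforation ratio is dimensionless, dividing every length by 8 multiplies the resonance frequency by 8.

```python
import math

def perforated_panel_f0(c, perf_ratio, hole_radius, thickness, cavity_depth):
    """Helmholtz-resonator estimate of the peak-absorption frequency of a
    perforated panel backed by an air cavity (all lengths in metres)."""
    t_eff = thickness + 1.7 * hole_radius   # neck length with end correction
    return (c / (2.0 * math.pi)) * math.sqrt(perf_ratio / (t_eff * cavity_depth))

SCALE = 8.0  # one-to-eight model, as in the scaled reverberation chamber
f_full = perforated_panel_f0(343.0, 0.05, 0.0025, 0.010, 0.050)
f_model = perforated_panel_f0(343.0, 0.05, 0.0025 / SCALE,
                              0.010 / SCALE, 0.050 / SCALE)
# f_model / f_full == 8: geometric scaling alone moves the absorption peak
# up by the scale factor instead of keeping it in the design band.
```

This is exactly the obstacle the paper describes: matching the absorption coefficient in a desired band requires scaling relations beyond plain geometric similarity.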
Procedia PDF Downloads 196
657 Effectiveness of Technology Enhanced Learning in Orthodontic Teaching
Authors: Mohammed Shaath
Abstract:
Aims: Technological advancements in teaching and learning have made significant improvements over the past decade and have been incorporated into institutions to aid the learner's experience. This review aims to assess whether Technology Enhanced Learning (TEL) pedagogy is more effective than traditional methods at improving students' attitudes and knowledge retention in orthodontic training. Methodology: The searches comprised systematic reviews (SRs) comparing TEL and traditional teaching methods in the following databases: PubMed, SCOPUS, Medline, and Embase. One researcher performed the screening, data extraction, and analysis and assessed risk of bias and quality using A Measurement Tool to Assess Systematic Reviews 2 (AMSTAR-2). Kirkpatrick's 4-level evaluation model was used to evaluate educational value. Results: A total of 34 SRs were identified after removal of duplicates and irrelevant SRs; 4 met the inclusion criteria. At Level 1, students responded positively to TEL methods, although the harder a platform was to use, the less favourable the response; nonetheless, students still showed high levels of acceptability. Level 2 showed no significant overall advantage in knowledge gain for TEL methods, although one SR showed that certain aspects of study within orthodontics delivered a statistically significant improvement with TEL. Level 3 was the least reported on; the results suggest that, without time restrictions, TEL methods may be advantageous. Level 4 shows that both methods are equally effective, but TEL has the potential to overtake traditional methods in the future as a form of active, student-centered approach. Conclusion: TEL has a high level of acceptability and the potential to improve learning in orthodontics. Current reviews could be improved, but the biggest aspect that needs to be addressed is the primary studies, which show a lower level of evidence and heterogeneity in their results.
As it stands, the replacement of traditional methods with TEL cannot be fully supported in an evidence-based manner. The potential of TEL has been recognized, and there is already some evidence that it can be more effective in some aspects of learning, catering to a more technology-savvy generation.
Keywords: TEL, orthodontic, teaching, traditional
Procedia PDF Downloads 42
656 Using Hyperspectral Camera and Deep Learning to Identify the Ripeness of Sugar Apples
Authors: Kuo-Dung Chiou, Yen-Xue Chen, Chia-Ying Chang
Abstract:
This study uses AI technology to establish an expert system and a fruit appearance database for the Pineapple Custard Apple and Big Eyed Custard Apple varieties. Images are collected according to appearance defects and fruit maturity, and deep learning is used to locate the fruit and detect its appearance, flaws, and maturity in real time. In addition, a hyperspectral camera was used to scan the fruits, and the light reflection in different frequency bands was used to find the key band for pectin softening in post-ripening fruit. A large number of multispectral images were collected and analyzed to establish a database of the two varieties, comprising a high-definition color image database, a hyperspectral database covering the 377-1020 nm band, and a multispectral database of five bands (450, 500, 670, 720, and 800 nm). The collection includes 4896 images with manually labeled ground truth, 26 hyperspectral Pineapple Custard Apple fruits (520 images each), and 168 multispectral custard apple fruits (5 images each). Using the color image database to train the pre-trained YOLOv4 network architecture, with training weights derived from the fruit database, real-time detection was achieved with a recognition rate above 97.96%. A large number of continuous multispectral shots were also taken, and the difference and average ratio of the fruit reflectance in the 670 and 720 nm bands were calculated; both follow the same trend, increasing until maturity and decreasing afterwards. Subsequent work will add further sub-bands to analyze sugar content and moisture numerically and to derive an absolute maturity value and a maturity data curve.
Keywords: hyperspectral image, fruit firmness, deep learning, automatic detection, automatic measurement, intelligent labor saving
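The 670/720 nm trend tracking described above reduces to simple per-fruit band statistics. A minimal sketch follows; the reflectance values are mock data, and the sign convention for the difference is an assumption, since the abstract does not specify it:

```python
from statistics import mean

def ripeness_indices(refl_670, refl_720):
    """Difference and ratio of a fruit's mean reflectance in the 670 nm
    and 720 nm bands, the two bands whose trend is tracked over ripening."""
    m670, m720 = mean(refl_670), mean(refl_720)
    return m720 - m670, m720 / m670

# Mock per-pixel reflectances for one fruit (not measured data):
diff, ratio = ripeness_indices([0.21, 0.19, 0.20], [0.42, 0.38, 0.40])
```

Computed per shot over a ripening sequence, these indices would rise toward maturity and fall afterwards, matching the trend the study reports.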
Procedia PDF Downloads 1
655 Optimization of MAG Welding Process Parameters Using Taguchi Design Method on Dead Mild Steel
Authors: Tadele Tesfaw, Ajit Pal Singh, Abebaw Mekonnen Gezahegn
Abstract:
Welding is a basic manufacturing process for making components or assemblies. Recent welding economics research has focused on developing reliable machinery databases to ensure optimum production. Research on the welding of materials like steel is still critical and ongoing. Welding input parameters play a very significant role in determining the quality of a weld joint, and the metal active gas (MAG) welding parameters are among the most important factors affecting the quality, productivity, and cost of welding in many industrial operations. The aim of this study is to find optimal process parameters for metal active gas welding of 60 × 60 × 5 mm dead mild steel plate work-pieces, using the Taguchi method to formulate the statistical experimental design on a semi-automatic welding machine. The experimental study was conducted at Bishoftu Automotive Industry, Bishoftu, Ethiopia. The study examines the influence on weld hardness variability of four welding parameters (control factors), welding voltage (V), welding current (A), wire speed (m/min), and CO2 gas flow rate (l/min), each at three levels. The objective function was chosen as the welding hardness of the final product. Nine experimental runs based on an L9 orthogonal array of the Taguchi method were performed. The orthogonal array, signal-to-noise (S/N) ratio, and analysis of variance (ANOVA) were employed to investigate the welding characteristics of the dead mild steel plate and to obtain the optimum level of every input parameter at a 95% confidence level. The optimal parameter setting within the constraints of the production process was found to be a welding voltage of 22 V, a welding current of 125 A, a wire speed of 2.15 m/min, and a gas flow rate of 19 l/min.
Finally, six confirmation welds were carried out; comparison of the predicted values with the experimental values confirms the method's effectiveness in the analysis of welding hardness (quality) in the final products. Welding current was found to have the major influence on the quality of the welded joints, and the experimental result at the optimum setting gave better weld hardness than the initial setting. This study is valuable for welding plates of different materials and thicknesses in Ethiopian industries.
Keywords: weld quality, metal active gas welding, dead mild steel plate, orthogonal array, analysis of variance, Taguchi method
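For a larger-the-better response such as weld hardness, the Taguchi S/N ratio used to rank each L9 run is -10·log10 of the mean of 1/y². A minimal sketch, with hypothetical hardness replicates rather than the study's data:

```python
import math

def sn_larger_the_better(responses):
    """Taguchi signal-to-noise ratio for a larger-the-better response
    such as weld hardness: -10 * log10(mean(1 / y^2))."""
    mean_inv_sq = sum(1.0 / y ** 2 for y in responses) / len(responses)
    return -10.0 * math.log10(mean_inv_sq)

# Hypothetical hardness replicates for one L9 run (not the study's data):
sn = sn_larger_the_better([210.0, 215.0, 208.0])
```

The run, and by averaging, the factor level, with the highest S/N ratio is preferred; ANOVA on the S/N values then apportions each factor's contribution.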
Procedia PDF Downloads 481
654 Polypyrrole Integrated MnCo2O4 Nanorods Hybrid as Electrode Material for High Performance Supercapacitor
Authors: Santimoy Khilari, Debabrata Pradhan
Abstract:
Ever-increasing energy demand and a growing energy crisis, together with environmental issues, drive research on sustainable energy conversion and storage systems. Supercapacitors, or electrochemical capacitors, have recently emerged as a promising energy storage technology for the next generation. The performance of a supercapacitor generally depends on the efficiency of its electrode materials, so the development of cost-effective, efficient electrode materials for supercapacitors is one of the challenges facing the scientific community. Transition metal oxides with spinel crystal structure receive much attention for different electrochemical applications in energy storage/conversion devices because of their improved performance compared to simple oxides. In the present study, we have synthesized a polypyrrole (PPy) supported manganese cobaltite nanorod (MnCo2O4 NRs) hybrid electrode material for supercapacitor application. The MnCo2O4 NRs were synthesized by a simple hydrothermal and calcination approach, and the MnCo2O4 NRs/PPy hybrid was prepared by in situ impregnation of MnCo2O4 NRs during the polymerization of pyrrole. The surface morphology and microstructure of the as-synthesized samples were characterized by scanning electron microscopy and transmission electron microscopy, respectively, and the crystallographic phases of the MnCo2O4 NRs, PPy, and the hybrid were determined by X-ray diffraction. The electrochemical charge storage activity of the MnCo2O4 NRs, PPy, and MnCo2O4 NRs/PPy hybrid was evaluated by cyclic voltammetry, chronopotentiometry, and electrochemical impedance spectroscopy. A significant improvement in specific capacitance was achieved with the MnCo2O4 NRs/PPy hybrid compared to the individual components. Furthermore, mechanically mixed MnCo2O4 NRs and PPy show a lower specific capacitance than the MnCo2O4 NRs/PPy hybrid, underscoring the importance of the in situ hybrid preparation.
The stability of the as-prepared electrode materials was tested by cyclic charge-discharge measurements over 1000 cycles; a maximum of 94% of the capacitance was retained with the MnCo2O4 NRs/PPy hybrid electrode. This study suggests that the MnCo2O4 NRs/PPy hybrid can be used as a low-cost electrode material for charge storage in supercapacitors. Keywords: supercapacitors, nanorods, spinel, MnCo2O4, polypyrrole
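The specific capacitance behind results like the 94% retention figure is conventionally extracted from a galvanostatic charge-discharge (chronopotentiometry) curve via C_sp = I·Δt/(m·ΔV). A minimal sketch of that standard relation follows; the current, mass, and discharge times are illustrative numbers, not values from this study.

```python
# Standard specific-capacitance relation for a galvanostatic
# charge-discharge measurement: C_sp = I * dt / (m * dV).
# All numeric inputs below are invented for illustration.

def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Specific capacitance in F/g from a GCD discharge curve."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

def retention(c_after, c_initial):
    """Capacitance retention in percent after cycling."""
    return 100.0 * c_after / c_initial

c0 = specific_capacitance(0.002, 250.0, 0.002, 1.0)      # 250.0 F/g
c_1000 = specific_capacitance(0.002, 235.0, 0.002, 1.0)  # after cycling
kept = retention(c_1000, c0)                             # 94.0 %
```

A shrinking discharge time at constant current is what shows up as capacitance fade over cycling.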
Procedia PDF Downloads 340
653 Distance and Coverage: An Assessment of Location-Allocation Models for Fire Stations in Kuwait City, Kuwait
Authors: Saad M. Algharib
Abstract:
The major concern of planners when placing fire stations is finding optimal locations such that fire companies can reach fire locations within a reasonable response time or distance. Planners are also concerned with the number of fire stations needed to cover all service areas and the fires, as demands, within a standard response time or distance. One of the tools for such analysis is location-allocation models, which enable planners to determine the optimal locations of facilities in an area in order to serve regional demands in the most efficient way. The purpose of this study is to examine the geographic distribution of the existing fire stations in Kuwait City. The study utilized location-allocation models within a Geographic Information System (GIS) environment and a number of statistical functions to assess the current locations of fire stations in Kuwait City. Further, it investigated how well all service areas are covered, and how many additional fire stations are needed and where. Four different location-allocation models were compared to find which models cover more demands than the others, given the same number of fire stations. This study tests ways of combining variables, instead of using one variable at a time, when applying these models, in order to create a new measurement that influences the optimal locations of fire stations. It also tests how sensitive location-allocation models are to different levels of spatial dependency. The results indicate that some districts in Kuwait City are not covered by the existing fire stations, and these uncovered districts are clustered together. The study also identifies where to locate the new fire stations, and provides users of these models with a new variable that can assist them in selecting the best locations for fire stations.
The results include information about how the location-allocation models behave in response to different levels of spatial dependency of demands; the models perform better with clustered demands. From the additional analysis carried out in this study, it can be concluded that these models perform differently under different spatial patterns. Keywords: geographic information science, GIS, location-allocation models, geography
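One of the model families the abstract compares, the maximal covering location problem, can be sketched with a simple greedy heuristic: repeatedly pick the candidate site that covers the most as-yet-uncovered demand within a response radius. The sites, demand weights, and radius below are invented for illustration; a GIS would supply real travel distances.

```python
# Greedy heuristic for a maximal covering location problem:
# choose n_stations sites maximizing covered demand weight.
# All coordinates and weights are hypothetical.

def greedy_max_cover(candidates, demands, radius, n_stations):
    """candidates: {site: (x, y)}; demands: {point: ((x, y), weight)}."""
    def covers(site, pt):
        (sx, sy), ((px, py), _) = candidates[site], demands[pt]
        return (sx - px) ** 2 + (sy - py) ** 2 <= radius ** 2

    chosen, uncovered = [], set(demands)
    for _ in range(n_stations):
        # site with the largest newly covered demand weight
        best = max(candidates, key=lambda s: sum(
            demands[p][1] for p in uncovered if covers(s, p)))
        chosen.append(best)
        uncovered -= {p for p in uncovered if covers(best, p)}
    return chosen, uncovered

stations, missed = greedy_max_cover(
    candidates={"A": (0, 0), "B": (4, 0)},
    demands={"d1": ((0, 1), 5), "d2": ((4, 1), 3), "d3": ((9, 9), 1)},
    radius=2.0, n_stations=2)
# "d3" stays uncovered, mirroring the uncovered districts in the study
```

The uncovered set is exactly the kind of output that flags districts outside any station's service radius.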
Procedia PDF Downloads 178
652 Body Composition Analysis of University Students by Anthropometry and Bioelectrical Impedance Analysis
Authors: Vinti Davar
Abstract:
Background: Worldwide, at least 2.8 million people die each year as a result of being overweight or obese, and 35.8 million (2.3%) of global DALYs are caused by overweight or obesity. Obesity is acknowledged as one of the burning public health problems reducing life expectancy and quality of life. Body composition analysis of the university population is essential for assessing nutritional status, as well as the risk of developing diseases associated with abnormal body fat content, so as to make nutritional recommendations. Objectives: The main aim was to determine the prevalence of obesity and overweight in university students using anthropometric analysis and BIA methods. Material and Methods: In this cross-sectional study, 283 university students participated. The body composition analysis was undertaken mainly by: i) anthropometric measurement: height, weight, BMI, waist circumference, hip circumference and skinfold thickness; ii) bioelectrical impedance analysis of body fat mass, fat percent and visceral fat, measured with a Tanita SC-330P Professional Body Composition Analyzer. The data so collected were compiled in MS Excel and analyzed for males and females using SPSS 16. Results and Discussion: The mean age of the male subjects (n=153) was 25.37±2.39 years and that of the females (n=130) was 22.53±2.31 years. The BIA data revealed a very high mean fat percentage for the female subjects, 30.3±6.5 per cent, whereas the mean fat percentage of the male subjects, 15.60±6.02 per cent, indicated a normal body fat range. The findings showed high visceral fat in both males (12.92±3.02) and females (16.86±4.98). BF% and WHR were higher among females, and BMI was higher among males. The most evident correlation was verified between BF% and WHR for female students (r=0.902; p<0.001). The correlations of BFM and BF% with the thickness of the triceps, subscapular and abdominal skinfolds and with BMI were significant (p<0.001).
Conclusion: The studied data made it obvious that there is a need to initiate lifestyle-changing strategies, especially for adult females, and to encourage them to improve their dietary intake to prevent the incidence of non-communicable diseases due to obesity and a high fat percentage. Keywords: anthropometry, bioelectrical impedance, body fat percentage, obesity
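The headline association here, r = 0.902 between BF% and WHR, is a sample Pearson correlation coefficient. A minimal pure-Python sketch of that computation follows; the five BF%/WHR pairs are invented for illustration, not the study's data.

```python
# Sample Pearson correlation coefficient, computed from first principles.
# The example values are hypothetical stand-ins for BF% and WHR pairs.
import math

def pearson_r(xs, ys):
    """Pearson r of two equal-length lists of measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

bf_pct = [28.0, 31.5, 25.2, 36.0, 30.1]   # hypothetical body fat %
whr    = [0.78, 0.82, 0.75, 0.88, 0.80]   # hypothetical waist-hip ratios
r = pearson_r(bf_pct, whr)                 # strongly positive for these data
```

In practice a statistics package (the study used SPSS 16) also supplies the p-value for the observed r.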
Procedia PDF Downloads 381
651 Analysis of Radiation-Induced Liver Disease (RILD) and Evaluation of Relationship between Therapeutic Activity and Liver Clearance Rate with Tc-99m-Mebrofenin in Yttrium-90 Microspheres Treatment
Authors: H. Tanyildizi, M. Abuqebitah, I. Cavdar, M. Demir, L. Kabasakal
Abstract:
Aim: Whole-liver radiation has a modest benefit in the treatment of unresectable hepatic metastases, but the radiation dose must be kept under control; otherwise, RILD complications may arise. In this study, we aimed to calculate the maximum permissible activity (MPA) and critical organ absorbed doses with MIRD methodology, to evaluate tumour doses for treatment response and whole-liver doses for RILD, and additionally to find the optimal liver function test. Materials and Methods: This study includes 29 patients who underwent Y-90 microspheres treatment in our nuclear medicine department. For dosimetry, 10 mCi of Tc-99m MAA was administered to the patients intravenously. One hour after the injection, whole-body SPECT/CT images were taken. Taking the minimum therapeutic tumour dose to be 120 Gy [1], the amounts of activity were calculated with MIRD methodology considering the volumetric tumour/liver rate. A sub-working group of 11 patients was created randomly, and the liver clearance rate with Tc-99m-Mebrofenin was calculated according to the Ekman formalism. Results: The volumetric tumour/liver rates were found to be between 33-66% (Maximum Tolerable Dose (MTD) 48-52 Gy [3]) for 4 patients, and less than 33% (MTD 72 Gy [3]) for 25 patients. According to these results, the average amount of activity, mean liver dose and mean tumour dose were found to be 1793.9±1.46 MBq, 32.86±0.19 Gy, and 138.26±0.40 Gy, respectively. RILD was not observed in any patient. In the sub-working group, the correlations of the calculated activity amounts with bilirubin, albumin, INR (which show the presence of liver disease and its degree) and liver clearance with Tc-99m-Mebrofenin were found to be r=0.49, r=0.27, r=0.43, and r=0.57, respectively. Discussion: The minimum tumour dose was found to be 120 Gy for a positive dose-response relation. If the volumetric tumour/liver rate was > 66%, the dose was 30 Gy; if 33-66%, the dose escalated to 48 Gy; if < 33%, to 72 Gy.
These dose limitations did not produce RILD. Clearance measurement with Mebrofenin was concluded to be the best method for determining liver function. Therefore, the liver clearance rate with Tc-99m-Mebrofenin should be considered in the calculation of yttrium-90 microspheres dosimetry. Keywords: clearance, dosimetry, liver, RILD
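The MIRD-style activity calculation the abstract refers to rests, for Y-90 microspheres, on the widely used relation D [Gy] = 49.67 × A [GBq] / M [kg] (assuming all beta energy is absorbed locally in a uniformly perfused volume), so the activity for a target dose is A = D·M/49.67. A minimal sketch, with illustrative masses rather than patient data:

```python
# MIRD-style Y-90 dose/activity relation, assuming local absorption of
# all beta energy: D [Gy] = 49.67 * A [GBq] / M [kg].
# Masses below are illustrative, not patient data from the study.

Y90_GY_PER_GBQ_PER_KG = 49.67  # absorbed dose coefficient for Y-90

def activity_for_dose(target_dose_gy, tissue_mass_kg):
    """Y-90 activity (GBq) delivering target_dose_gy to tissue_mass_kg."""
    return target_dose_gy * tissue_mass_kg / Y90_GY_PER_GBQ_PER_KG

def absorbed_dose(activity_gbq, tissue_mass_kg):
    """Absorbed dose (Gy) from activity_gbq uniformly in tissue_mass_kg."""
    return Y90_GY_PER_GBQ_PER_KG * activity_gbq / tissue_mass_kg

# e.g. 120 Gy to a hypothetical 0.5 kg tumour volume needs about 1.21 GBq
a = activity_for_dose(120.0, 0.5)
```

In the actual workflow the Tc-99m MAA SPECT/CT supplies the volumetric tumour/liver partition that splits this activity between compartments.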
Procedia PDF Downloads 440
650 Exposure to Ionizing Radiation Resulting from the Chernobyl Fallout and Childhood Cardiac Arrhythmia: A Population Based Study
Authors: Geraldine Landon, Enora Clero, Jean-Rene Jourdain
Abstract:
In 2005, the Institut de Radioprotection et de Sûreté Nucléaire (IRSN, France) launched a research program named EPICE (acronym for 'Evaluation of Pathologies potentially Induced by CaEsium') to collect scientific information on non-cancer effects possibly induced by chronic exposure to low doses of ionizing radiation, with the view of addressing a question raised by several French NGOs about the health consequences of the Chernobyl nuclear accident in children. The implementation of the program was preceded by a pilot phase to ensure that the project would be feasible and to determine the conditions for implementing an epidemiological study on a population of several thousand children. The EPICE program focused on childhood cardiac arrhythmias and ran from May 2009 for 4 years, in partnership with the Russian Bryansk Diagnostic Center. The purpose of this cross-sectional study was to determine the prevalence of cardiac arrhythmias in the Bryansk oblast (depending on the contamination of the territory and the caesium-137 whole-body burden) and to assess whether or not caesium-137 was a factor associated with the onset of cardiac arrhythmias. To address these questions, a study bringing together 18,152 children aged 2 to 18 years was initiated; each child received three medical examinations (ECG, echocardiography, and caesium-137 whole-body activity measurement), and some of them were given 24-hour Holter monitoring and blood tests. The findings of the study, currently submitted to an international journal (so no results can be given at this stage), allow us to answer clearly the question of radiation-induced childhood arrhythmia, a subject that has been debated for many years.
Our results will certainly be helpful for health professionals responsible for monitoring populations exposed to the releases from the Fukushima Dai-ichi nuclear power plant, and also useful for future comparative studies in children exposed to ionizing radiation in other contexts, such as cancer radiation therapies. Keywords: caesium-137, cardiac arrhythmia, Chernobyl, children
Procedia PDF Downloads 245
649 Smart BIM Documents - the Development of the Ontology-Based Tool for Employer Information Requirements (OntEIR), and its Transformation into SmartEIR
Authors: Shadan Dwairi
Abstract:
Defining proper requirements is one of the key factors for a successful construction project. Although many attempts have been put forward to assist in identifying requirements, this area is still underdeveloped in Building Information Modelling (BIM) projects. The Employer Information Requirements (EIR) is the fundamental requirements document and a necessary ingredient in achieving a successful BIM project; the provision of a full and clear EIR is essential to achieving BIM Level-2. As defined by PAS 1192-2, the EIR is a “pre-tender document that sets out the information to be delivered and the standards and processes to be adopted by the supplier as part of the project delivery process”. It also notes that the “EIR should be incorporated into tender documentation to enable suppliers to produce an initial BIM Execution Plan (BEP)”. The importance of an effective definition of the EIR lies in its contribution to better productivity during the construction process in terms of cost and time, in addition to improving the quality of the built asset. Proper and clear information is a key aspect of the EIR, in terms of the information it contains and, more importantly, the information the client receives at the end of the project that will enable the effective management and operation of the asset, where typically about 60%-80% of the cost is spent. This paper reports on the research done in developing the Ontology-based tool for Employer Information Requirements (OntEIR). OntEIR has proven the ability to produce a full and complete set of EIRs, which ensures that the client's information needs for the final model delivered by BIM are clearly defined from the beginning of the process. It also reports on the work being done to transform OntEIR into a smart tool for defining Employer Information Requirements (smartEIR).
smartEIR extends the OntEIR tool, enabling it to develop custom EIRs tailored to the project type, the project requirements, and the client's capabilities. The initial idea behind smartEIR is moving away from the notion that one EIR fits all. smartEIR utilizes the links made in OntEIR, creating a 3D matrix that transforms it into a smart tool. The OntEIR tool is based on the OntEIR framework, which utilizes both ontology and the decomposition of goals to elicit and extract the complete set of requirements needed for a full and comprehensive EIR. A new categorisation system for requirements is also introduced in the framework and tool, which facilitates the understanding and enhances the clarification of the requirements, especially for novice clients. Findings of the evaluation of the tool, carried out with experts in the industry, showed that the OntEIR tool contributes towards the effective and efficient development of EIRs that provide a better understanding of the information requirements as requested by BIM, and supports the production of a complete BIM Execution Plan (BEP) and a Master Information Delivery Plan (MIDP).Keywords: building information modelling, employer information requirements, ontology, web-based, tool
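The goal-decomposition idea the framework relies on can be sketched very simply: a high-level client goal is decomposed into sub-goals, and the leaf goals yield concrete information requirements. The goal tree and requirement names below are hypothetical examples, not taken from the actual OntEIR ontology.

```python
# Toy goal-decomposition: walk a goal tree and collect the information
# requirements attached to leaf goals. Goals and requirements here are
# hypothetical illustrations, not OntEIR content.

GOALS = {
    "operate asset efficiently": ["manage maintenance", "manage energy"],
    "manage maintenance": [],   # leaf
    "manage energy": [],        # leaf
}
REQUIREMENTS = {  # hypothetical requirements attached to leaf goals
    "manage maintenance": ["asset register in COBie format",
                           "maintenance schedules per component"],
    "manage energy": ["metered-zone model elements"],
}

def decompose(goal):
    """Collect the requirements of every leaf goal under `goal`."""
    subs = GOALS.get(goal, [])
    if not subs:
        return list(REQUIREMENTS.get(goal, []))
    reqs = []
    for sub in subs:
        reqs.extend(decompose(sub))
    return reqs

eir_items = decompose("operate asset efficiently")  # three requirements
```

An ontology adds typed relations and reasoning on top of this, but the elicitation flow from client goal to deliverable information item is the same.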
Procedia PDF Downloads 127
648 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse field of tasks, such as urban planning, military applications, glacier mapping, and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Since this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have had widespread use in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for the construction of DTMs. In recent years, especially because of its superiorities, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications; a 3D point cloud is created with LiDAR technology by obtaining numerous point data. More recently, with the development of image mapping methods, the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition has increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, the random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75, 50, 25 and 5% of the original image-based point cloud data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method.
The results show that the random data reduction method can be used to reduce image-based point cloud datasets to the 50% density level while still maintaining the quality of the DTM. Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging
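The random reduction step itself is straightforward: draw a uniform random subset of the point cloud at each target density. A minimal sketch follows; the synthetic (x, y, z) points stand in for real photogrammetric data, and the seeding is an assumption added for reproducibility.

```python
# Random data reduction of a point cloud to 75/50/25/5% of its density.
# The synthetic cloud below stands in for real (x, y, z) UAV data.
import random

def reduce_point_cloud(points, fraction, seed=0):
    """Return a uniform random subset containing `fraction` of the points."""
    k = max(1, int(len(points) * fraction))
    rng = random.Random(seed)          # seeded for reproducibility
    return rng.sample(points, k)

cloud = [(x * 0.5, x * 0.3, 100.0 + x % 7) for x in range(1000)]
subsets = {pct: reduce_point_cloud(cloud, pct / 100) for pct in (75, 50, 25, 5)}
```

Each subset would then be interpolated (Kriging in the study) and the resulting surface differenced against the full-density DTM.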
Procedia PDF Downloads 156
647 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis
Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab
Abstract:
Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on the peak plantar pressure distribution and the center of pressure during walking. Methodologies: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, because of differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (classification of yes or no). We compared the DNN and the SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach was more accurate, enabling applications for foot disorder diagnosis. The detection accuracy was 71% with the deep learning algorithm and 78% with the SVM algorithm. Moreover, working with the peak plantar pressure distribution was more accurate than working with the center of pressure dataset.
Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed. Keywords: deep neural network, foot disorder, plantar pressure, support vector machine
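The preprocessing step the abstract mentions, normalizing gait samples that differ in walking time and foot size, can be sketched as resampling each pressure series to a fixed number of time points and scaling by foot length. The resampling scheme and values below are illustrative assumptions, not the study's actual pipeline.

```python
# Toy normalization of gait samples: resample a pressure series to a
# fixed length (time normalization) and divide by foot length (size
# normalization). Scheme and numbers are illustrative assumptions.

def normalize_sample(pressures, foot_length_cm, n_points=5):
    """Resample a pressure series to n_points and scale by foot length."""
    m = len(pressures)
    resampled = [pressures[int(i * (m - 1) / (n_points - 1))]
                 for i in range(n_points)]
    return [p / foot_length_cm for p in resampled]

# two walks of different duration and foot size become comparable vectors
a = normalize_sample([10, 12, 14, 16, 18, 20, 22, 24, 26], foot_length_cm=26)
b = normalize_sample([11, 15, 19, 23], foot_length_cm=24)
```

After this step every sample has the same dimensionality and scale, which is what lets a DNN or SVM consume them as fixed-length feature vectors.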
Procedia PDF Downloads 359
646 Comparative Settlement Analysis on the under of Embankment with Empirical Formulas and Settlement Plate Measurement for Reducing Building Crack around of Embankments
Authors: Safitri Nur Wulandari, M. Ivan Adi Perdana, Prathisto L. Panuntun Unggul, R. Dary Wira Mahadika
Abstract:
In road construction on soft soil, a soil improvement method is needed to improve the bearing capacity of the subgrade so that the soil can withstand traffic loads. Most of the land in Indonesia is soft soil, a type of clay with a consistency of very soft to medium stiff, an undrained shear strength Cu < 0.25 kg/cm2, or an estimated NSPT value of < 5 blows/ft. This study focuses on the analysis of the effect of the preloading load (embankment) on the settlement ratio under the embankment, which in turn impacts building cracks around the embankment. The method used in this research is a superposition method for the embankment distribution at 27 locations, with undisturbed soil samples from borehole points in Java and Kalimantan, Indonesia, and the results of settlement plate monitoring in the field are then correlated with the Asaoka method. The settlement plate monitoring results were taken from an embankment at Ahmad Yani airport in Semarang at 32 points. The Cc (compression index) values of the soil were based on laboratory test results; where Cc was not tested, it was obtained from the empirical formula of Ardhana and Mochtar, 1999. The field monitoring showed almost the same results as the empirical formulation, with a standard deviation of 4%, where the empirical result of this analysis is obtained by a linear formula. The empirical linear formula for the effect of compression under an embankment as high as 4.25 m is y = 3.1209x + 0.0026 for an embankment slope of 1:8, for the same analysis with the initial embankment height in the field. Note that the settlement at the edge of the embankment is not equal to 0: at a quarter of the embankment the average settlement ratio is 0.951, while at the edge of the embankment the settlement ratio is 0.049. The influence area around the embankment is approximately 1 meter for a slope of 1:8 and 7 meters for a slope of 1:2.
Settlement within these influence areas can cause building cracks, and should therefore be accounted for when building in sustainable development. Keywords: building cracks, influence area, settlement plate, soft soil, empirical formula, embankment
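Read as a linear relation, the abstract's empirical formula for the 4.25 m, 1:8-slope case can be evaluated directly. What x and y denote physically (here assumed to be a normalized position and the settlement ratio) is our reading of the abstract, so the sketch below is illustrative only.

```python
# Evaluating the abstract's empirical linear settlement relation,
# read here as y = 3.1209*x + 0.0026 (4.25 m embankment, 1:8 slope).
# The meaning of x and y is an assumption made for illustration.

def settlement_ratio(x, slope=3.1209, intercept=0.0026):
    """Evaluate the linear empirical relation y = slope*x + intercept."""
    return slope * x + intercept

# tabulate the relation over a few illustrative x values
table = {x: round(settlement_ratio(x), 4) for x in (0.0, 0.1, 0.2, 0.3)}
```

Such a fitted line is only as good as its calibration range; the study's 4% standard deviation against settlement plate data is what justifies using it there.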
Procedia PDF Downloads 345
645 Establishment and Aging Process Analysis in Dermal Fibroblast Cell Culture of Green Turtle (Chelonia mydas)
Authors: Yemima Dani Riani, Anggraini Barlian
Abstract:
The green turtle (Chelonia mydas) is a well-known long-lived turtle whose age can reach 100 years. Senescence in the green turtle is an interesting process to study because until now no clear explanation has been established of senescence at the cellular or molecular level in this species. Since 1999, the green turtle has been listed as an endangered species; hence, the establishment of a fibroblast skin cell culture of the green turtle may provide material for future studies of senescence. One common marker used for detecting senescence is telomere shortening. Reduced activity of telomerase, the reverse transcriptase enzyme that adds the TTAGGG DNA sequence to telomere ends, may also cause senescence. The purposes of this research were to establish and identify a green turtle fibroblast skin cell culture and to compare telomere length and telomerase activity between passages 5 and 14. The primary cell culture was made with the primary explant method and then cultured in Leibovitz-15 (Sigma) supplemented with 10% Fetal Bovine Serum (Sigma) and 100 U/mL Penicillin/Streptomycin (Sigma) at 30 ± 1°C. Cells were identified with Rabbit Anti-Vimentin Polyclonal Antibody (Abcam) and Goat Polyclonal Antibody (Abcam) using a confocal microscope (Zeiss LSM 170). Telomere length was obtained using the TeloTAGGG Telomere Length Assay (Roche), while telomerase activity was obtained using the TeloTAGGG Telomerase PCR ElisaPlus (Roche). The primary cell culture from green turtle skin had fibroblastic morphology, and an immunocytochemistry test with the vimentin antibody proved that the culture consisted of fibroblast cells. Measurement of telomere length and telomerase activity showed that both were greater at passage 14 than at passage 5. However, based on morphology, the green turtle fibroblast skin cell culture showed senescent morphology.
Based on the analysis of telomere length and telomerase activity, it is suspected that the fibroblast skin cell culture of the green turtle does not undergo aging through telomere shortening. Keywords: cell culture, Chelonia mydas, telomerase, telomere, senescence
Procedia PDF Downloads 425
644 Voluntary Water Intake of Flavored Water in Euhydrated Horses
Authors: Brianna M. Soule, Jesslyn A. Bryk-Lucy, Linda M. Ritchie
Abstract:
Colic, defined as abdominal pain in the horse, has several known predisposing factors; decreased water intake has been shown to predispose equines to impaction colic. The objective of this study was to determine if offering flavored water (sweet feed or banana extract) would increase voluntary water intake in horses, to serve as an accessible, noninvasive method for farm managers, veterinarians, or owners to decrease the risk of impaction colic. An a priori power analysis, conducted using G*Power version 3.1.9.7, indicated that the minimum sample size required to achieve 80% power for detecting a large effect at a significance level of α = .05 was 19 horses for a one-way repeated-measures ANOVA with three treatment levels, assuming a non-sphericity correction of ε = 0.5. After a three-day control period, 21 horses were randomly divided into two sequences and offered either banana- or sweet-feed-flavored water. Horses always had a bucket of unflavored water available. A repeated-measures study design was used to measure the water consumption of each horse over a 62-hour period. A one-way repeated-measures ANOVA was conducted to determine whether there were statistically significant differences among the means for the three-day average water intake (ml/kg). Although not statistically significant (F(2, 38) = 1.28, p = .290, partial η2 = .063), the three-day average water intake was largest for banana-flavored water (M = 53.51, SD = 9.25 ml/kg), followed by sweet feed (M = 52.93, SD = 11.99 ml/kg), and, finally, unflavored water (M = 50.40, SD = 10.82 ml/kg). Paired-samples t-tests were used to determine whether there was a statistically significant difference between the three-day average water intake (ml/kg) for flavored versus unflavored water.
The average unflavored water intake (M = 29.3 ml/kg, SD = 8.9) over the measurement period was greater than that of the banana-flavored water (M = 27.7 ml/kg, SD = 9.8), but the average consumption of the sweet-feed-flavored water (M = 30.4 ml/kg, SD = 14.6) was greater than that of unflavored water (M = 24.3 ml/kg, SD = 11.4). None of these differences in average intake were statistically significant (p > .244). Future research is warranted to determine if other flavors significantly increase voluntary water intake in horses. Keywords: colic, equine, equine science, water intake, flavored water, horses, equine management, equine health, horse health, horse health care management, colic prevention
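The paired-samples t-tests reported above rest on the statistic t = mean(d) / (sd(d)/√n) over the per-horse differences d between two conditions. A minimal sketch follows; the per-horse intakes are invented for illustration, not this study's data.

```python
# Paired-samples t statistic on per-subject differences:
#   t = mean(d) / (sd(d) / sqrt(n)),  df = n - 1.
# The per-horse intake values are hypothetical.
import math

def paired_t(xs, ys):
    """Paired-samples t statistic for two equal-length measurement lists."""
    d = [x - y for x, y in zip(xs, ys)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

sweet = [31.0, 28.5, 35.2, 27.9, 30.1]   # hypothetical ml/kg per horse
plain = [25.0, 27.0, 30.5, 24.1, 26.9]
t_stat = paired_t(sweet, plain)  # compare to a t critical value, df = n - 1
```

The resulting t is then converted to a p-value against the t distribution with n-1 degrees of freedom, which a statistics package normally handles.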
Procedia PDF Downloads 149
643 Influence of Ammonia Emissions on Aerosol Formation in Northern and Central Europe
Authors: A. Aulinger, A. M. Backes, J. Bieser, V. Matthias, M. Quante
Abstract:
High concentrations of particles pose a threat to human health. Thus, the legal maximum concentrations of PM10 and PM2.5 in ambient air have been steadily decreased over the years. In central Europe, the inorganic species ammonium sulphate and ammonium nitrate make up a large fraction of fine particles. Many studies investigate the influence of emission reductions of sulfur and nitrogen oxides on aerosol concentrations. Here, we focus on the influence of ammonia (NH3) emissions. While emissions of sulphate and nitrogen oxides are quite well known, ammonia emissions are subject to high uncertainty. This is due to the uncertainty in the location, amount, and timing of fertilizer application in agriculture, and in the storage and treatment of manure from animal husbandry. For this study, we implemented a crop growth model into the SMOKE emission model. Depending on temperature, local legislation, and crop type, individual temporal profiles for fertilizer and manure application are calculated for each model grid cell. Additionally, the diffusion from soils and plants and the direct release from open and closed barns are determined. The emission data were used as input for the Community Multiscale Air Quality (CMAQ) model. Comparisons to observations from the EMEP measurement network indicate that the new ammonia emission module leads to a better agreement of model and observation (for both ammonia and ammonium). Finally, the ammonia emission model was used to create emission scenarios. This includes emissions based on future European legislation, as well as a dynamic evaluation of the influence of different agricultural sectors on particle formation. It was found that a reduction of ammonia emissions by 50% led to a 24% reduction of total PM2.5 concentrations during winter time in the model domain. The observed reduction was mainly driven by reduced formation of ammonium nitrate.
Moreover, emission reductions during winter had a larger impact than during the rest of the year. Keywords: ammonia, ammonia abatement strategies, CTM, seasonal impact, secondary aerosol formation
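The grid-cell temporal profiles the abstract describes gate fertilizer application by conditions such as temperature and legal application windows. A deliberately simplified sketch of that idea follows; the threshold, start day, and temperatures are illustrative assumptions, not SMOKE's actual parameterization.

```python
# Toy temperature-gated temporal emission profile for fertilizer NH3:
# spread an annual total evenly over the days that are both legally
# permitted and warm enough. All parameters are illustrative.

def daily_profile(annual_nh3_kg, daily_temps_c, legal_start_day=0,
                  min_temp_c=5.0):
    """Spread annual NH3 evenly over permitted, warm-enough days."""
    allowed = {d for d, t in enumerate(daily_temps_c)
               if d >= legal_start_day and t >= min_temp_c}
    per_day = annual_nh3_kg / len(allowed) if allowed else 0.0
    return [per_day if d in allowed else 0.0
            for d in range(len(daily_temps_c))]

temps = [2.0, 4.0, 6.0, 8.0, 3.0, 9.0]        # toy six-day series
profile = daily_profile(120.0, temps, legal_start_day=1)
```

The real module additionally differentiates by crop type and manure handling, but the principle of redistributing a fixed annual total in time is the same.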
Procedia PDF Downloads 351
642 Biostimulant Activity of Chitooligomers: Effect of Different Degrees of Acetylation and Polymerization on Wheat Seedlings under Salt Stress
Authors: Xiaoqian Zhang, Ping Zou, Pengcheng Li
Abstract:
Salt stress is one of the most serious abiotic stresses, and it can lead to a reduction of agricultural productivity. A high salt concentration makes it more difficult for roots to absorb water and disturbs the homeostasis of cellular ions, resulting in osmotic stress, ion toxicity and the generation of reactive oxygen species (ROS). Compared with normal physiological conditions, salt stress can inhibit photosynthesis, break the metabolic balance and damage cellular structures, ultimately resulting in reduced crop yield. Therefore it is vital to develop practical methods for improving the salt tolerance of plants. Chitooligomers (COS) are partially depolymerized products of chitosan, which consists of D-glucosamine and N-acetyl-D-glucosamine. In agriculture, COS has the ability to promote plant growth and induce plant innate immunity. The bioactivity of COS is closely related to its degree of polymerization (DP) and degree of acetylation (DA). However, most previous reports fail to address the function of COS with different DPs and DAs in improving the capacity of plants against salt stress. Accordingly, in this study, COS with different DAs were used to test the response of wheat seedlings to salt stress. In addition, COSs of determined DPs (DP 4-12) and a heterogeneous COS mixture were applied to explore the relationship between the DP of COS and its effect on the growth of wheat seedlings in response to salt stress. It was shown that COS, as an exogenous elicitor, could promote the growth of wheat seedlings, reduce the malondialdehyde (MDA) concentration, and increase the activities of antioxidant enzymes. The results of mRNA expression tests for salt stress-responsive genes indicated that COS protects plants from salt stress via the regulation of these genes and the increased antioxidant enzyme activities.
Moreover, it was found that the activity of COS was closely related to its DA, and COS with a DA of 50% displayed the best salt resistance activity in wheat seedlings. The results also showed that COS with different DPs could promote the growth of wheat seedlings under salt stress. COS with DP 6-8 showed better activities than the other tested samples, implying that its activity is closely related to its DP. After treatment with chitohexaose, chitoheptaose, and chitooctaose, the photosynthetic parameters were improved markedly. The soluble sugar and proline contents were improved by 26.7%-53.3% and 43.6%-70.2%, respectively, while the concentration of malondialdehyde (MDA) was reduced by 36.8%-49.6%. In addition, the antioxidant enzyme activities were clearly activated. At the molecular level, the results revealed that these COSs could markedly induce the expression of Na+/H+ antiporter genes. In general, these results are fundamental to the study of the action mechanism of COS in promoting plant growth under salt stress and to the preparation of plant growth regulators. Keywords: chitooligomers (COS), degree of polymerization (DP), degree of acetylation (DA), salt stress
Procedia PDF Downloads 175
641 Employee Wellbeing: The Key to Organizational Success
Authors: Crystal Hoole
Abstract:
Employee well-being has become an area of concern for top executives and organizations worldwide. In developing countries such as South Africa, and especially in the educational sector, employees have to deal with anxiety, stress, fear, student protests, political and economic turmoil and excessive work demands on a daily basis. Research has shown that workplaces with higher resilience and better well-being strategies also report higher productivity, increased innovation, better employee retention and better employee engagement. Many organisations offer standard employee assistance programs and once-off short interventions. However, most of these well-being initiatives are perceived as ineffective. Some of the criticism centers around a lack of holistic well-being approaches, no proof of the success of well-being initiatives, initiatives not being part of the organization's strategies, and a lack of genuine leadership support. This study attempts to illustrate how a holistic well-being intervention, over a period of 100 days, is far more effective in impacting organizational outcomes. A quasi-experimental pre-test and post-test design with a randomization strategy will be used. The constructs that will be measured are employee engagement, psychological well-being, organizational culture and trust, and perceived stress, with measurements taken at three time points throughout the study: before, middle and after. The well-being initiative follows a salutogenesis approach and is aimed at building resilience by focusing on six focal areas, namely sleep; mindful eating; exercise; love, gratitude and appreciation; breath work and mindfulness; and finally, purpose.
A quasi-experimental, pre-test and post-test design will be applied, using a randomization strategy to limit potential bias. Repeated-measures ANCOVA will be used to determine whether any change occurred over the period of 100 days. The study will take place at a higher education institution in South Africa. The sample will consist of academic and administrative staff. Participants will be assigned to a test and a control group. All participants will complete a survey measuring employee engagement, psychological well-being, organizational culture and trust, and perceived stress. Only the test group will undergo the well-being intervention. The study envisages contributing on several levels: firstly, it hopes to find a positive increase in the various well-being indicators of the participants, and secondly, to illustrate that a longer, more holistic approach is successful in improving organisational success (as measured by the various organizational outcomes).
Keywords: wellbeing, resilience, organizational success, intervention
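The logic of an ANCOVA on a pre-test/post-test design can be illustrated with a minimal sketch. Everything here is hypothetical (group sizes, effect size, scales are not from the study), and the design is reduced to two time points for brevity: the post-test score is regressed on group membership with the pre-test score as a covariate, so the group coefficient is the covariate-adjusted intervention effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 40 staff per group, engagement on a 1-5 scale.
n = 40
group = np.repeat([0, 1], n)                  # 0 = control, 1 = intervention
pre = rng.normal(3.5, 0.5, size=2 * n)        # baseline (pre-test) engagement
true_effect = 0.4                             # assumed gain from the 100-day programme
post = pre + true_effect * group + rng.normal(0.0, 0.2, size=2 * n)

# ANCOVA as a linear model: post ~ intercept + group + pre-test covariate.
# The coefficient on `group` estimates the intervention effect, adjusted
# for baseline differences captured by the pre-test.
X = np.column_stack([np.ones(2 * n), group, pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

print(f"adjusted intervention effect: {beta[1]:.2f}")
```

A full repeated-measures ANCOVA over the three measurement points (before, middle, after) adds a within-subject time factor; dedicated routines such as those in statsmodels are typically used for that.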
Procedia PDF Downloads 101

640 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products
Authors: Maciej Jedrzejczyk, Karolina Marzantowicz
Abstract:
The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by Smart Grid networks. New sources of energy introduce a volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact with surplus energy based on accurate, time-synchronous measurements. The proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and allow real-time insights into present and forecasted grid operations, as well as the state and health of the network. The objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply and a longer grid infrastructure life cycle. The methods used for this study are based on a comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security and adaptability to various market topologies. The intended output of this research is a design for a safer, more efficient and scalable Smart Grid network which bridges the gap between traditional components of the energy network and individual energy producers. The results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New platforms for Smart Grid achieving measurable efficiencies will allow for the development of new types of grid KPIs, multi-smart-grid branches, markets, and businesses.
Keywords: autonomous agents, distributed computing, distributed ledger technologies, large scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids
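As a loose illustration of the distributed-transaction-ledger idea evaluated here (not the framework itself; all field names are hypothetical), surplus-energy trades can be hash-chained so that tampering with any earlier record invalidates every later entry:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 digest of a trade record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_trade(ledger: list, seller: str, buyer: str, kwh: float, price: float) -> None:
    """Append a trade, chaining it to the hash of the previous entry."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"seller": seller, "buyer": buyer, "kwh": kwh, "price": price, "prev": prev}
    ledger.append({**body, "hash": record_hash(body)})

def verify(ledger: list) -> bool:
    """Recompute every hash and check each entry points at its predecessor."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev or record_hash(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = []
append_trade(ledger, "house-12", "house-07", kwh=1.5, price=0.11)
append_trade(ledger, "house-03", "house-12", kwh=0.8, price=0.10)
print(verify(ledger))    # True: chain is intact
ledger[0]["kwh"] = 99.0  # tamper with an earlier trade
print(verify(ledger))    # False: chain fails verification
```

Production DLTs add consensus, replication and signatures on top of this chaining; the comparison criteria listed in the abstract (scalability, transaction performance, privacy, and so on) are exactly where those platforms differ.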
Procedia PDF Downloads 302

639 Using Repetition of Instructions in Course Design to Improve Instructor Efficiency and Increase Enrollment in a Large Online Course
Authors: David M. Gilstrap
Abstract:
Designing effective instructions is a critical dimension of effective teaching systems. Due to the lack of interpersonal contact, online courses present new challenges in this regard, especially with large class sizes. This presentation is a case study in how repetition of instructions within the course design was utilized to increase instructor efficiency while managing a rapid rise in enrollment. World of Turf is a two-credit, semester-long elective course for non-turfgrass majors at Michigan State University. It is taught entirely online and solely by the instructor, without any graduate teaching assistants. Discussion forums about subject matter are designated for each lecture, and those forums are moderated by a few undergraduate turfgrass majors. Instructions on course structure, navigation, and grading are conveyed in the syllabus and the course-introduction lecture. Regardless, students email questions about such matters, and the number of emails increased as course enrollment grew steadily during the first three years of the course's existence, almost to the point that the course was becoming unmanageable. Many of these emails occurred because the instructor was failing to update and operate the course in a timely and proper fashion, being too busy answering emails. Some of the emails did help the instructor ferret out poorly composed instructions, which he corrected. Beginning in the summer semester of 2015, the instructor overhauled the course by segregating content into weekly modules. The philosophy envisioned and embraced was that there can never be too much repetition of instructions in an online course. Instructions were duplicated within each of these modules, as well as in associated modules for the syllabus and schedules, getting started, frequently asked questions, practice tests, surveys, and exams.
In addition, informational forums were created and set aside for questions about the course workings and each of the three exams, thus creating even more repetition. Within these informational forums, students typically answer each other’s questions, which demonstrates to the students that the information is available in the course. When needed, the instructor interjects with correct answers or clarifies any misinformation which students might be putting forth. Increasing the amount of repetition of instructions, together with strategic enhancements to the course design, has resulted in a dramatic decrease in the number of email replies required of the instructor. The resulting improvement in efficiency allowed the instructor to raise enrollment limits, effecting a ten-fold increase in enrollment over a five-year period, with 1050 students registered during the most recent academic year; the course thereby easily became the largest online course at the university. Because of the improvement in course-delivery efficiency, sufficient time was freed up to allow the instructor to develop and launch an additional online course, further enhancing his productivity and value in terms of the number of student-credit hours for which he is responsible.
Keywords: design, efficiency, instructions, online, repetition
Procedia PDF Downloads 209

638 Teachers' Beliefs About the Environment: The Case of Azerbaijan
Authors: Aysel Mehdiyeva
Abstract:
As a driving force of society, teachers play an important role in inspiring, motivating, and encouraging the younger generation to protect the environment. In light of this, the study aims to explore teachers’ beliefs in order to understand their engagement with teaching about the environment. Though teachers’ beliefs about the environment have been explored by a number of researchers, the influence of these beliefs on their professional lives and on shaping their classroom instruction has not been widely investigated in Azerbaijan. To this end, this study aims to reveal the beliefs of secondary school geography teachers about the environment and find out how those beliefs are enacted in their classroom practice in Azerbaijan. Different frameworks have been suggested for measuring environmental beliefs, stemming from the well-known anthropocentric and biocentric worldviews. The study draws on Dunlap’s New Ecological Paradigm (NEP) to formulate the interview questions, as discussion with teachers around these questions aligns with the research aims and serves to capture teachers’ beliefs about the environment well. Despite the extensive applicability of the NEP scale, it has not been used to explore in-service teachers’ beliefs about the environment. Besides, it has typically been used as a tool for quantitative measurement; here, however, the scale is addressed within the framework of a qualitative study. The research population for semi-structured interviews and observations was recruited via purposeful sampling. The choice of teachers as the unit of analysis relates to two gaps in the literature: how teachers’ beliefs relate to their classroom instruction within the environmental context, and teachers’ beliefs about the environment in Azerbaijan, neither of which has been well researched. Six geography teachers from four different schools were involved in the research process.
The schools are located in one of the most polluted parts of the capital city, Baku, where the first oil well in the world was drilled in 1848; the area is called the “Black City” due to the black smoke and smell that covered that part of the city. Semi-structured interviews were conducted with the teachers to reveal their stated beliefs. Later, the teachers were observed during geography classes to understand the overlap between the ideas they presented during the interviews and their teaching practice. The research findings aim to indicate teachers’ ecological beliefs and practice, as well as elaborate on possible causes of compatibility or incompatibility between teachers’ stated and observed beliefs.
Keywords: environmental education, anthropocentric beliefs, biocentric beliefs, new ecological paradigm
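For reference, the revised NEP scale that the interview questions draw on is usually scored quantitatively: fifteen 5-point Likert items, with the even-numbered (anti-ecological) statements reverse-coded so that a higher mean indicates a more biocentric worldview. A minimal scoring sketch, with a hypothetical respondent:

```python
def nep_score(responses):
    """Mean NEP score from fifteen 1-5 Likert responses (item 1 first).

    Even-numbered items express the anti-ecological view, so they are
    reverse-coded (1 <-> 5) before averaging; higher = more biocentric.
    """
    if len(responses) != 15:
        raise ValueError("the revised NEP scale has 15 items")
    coded = [r if i % 2 == 0 else 6 - r   # i is 0-based: odd item numbers kept, even flipped
             for i, r in enumerate(responses)]
    return sum(coded) / len(coded)

# Hypothetical respondent agreeing with the pro-ecological (odd-numbered)
# items and disagreeing with the anti-ecological (even-numbered) ones:
print(nep_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1, 5, 1, 5, 1, 5]))  # → 5.0
```

The present study uses the items qualitatively, as interview prompts, rather than computing such scores.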
Procedia PDF Downloads 107

637 Optical Flow Technique for Supersonic Jet Measurements
Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi
Abstract:
This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding-particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique involves using high-speed cameras to capture Schlieren images of the supersonic jet shear layers, before they are subjected to an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical flow based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As implementation of the optical flow technique for supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further improved for robustness and accuracy. Details of the methodology employed and the challenges faced will be further elaborated in the final conference paper should the abstract be accepted. Despite these challenges, however, this novel supersonic flow measurement technique may offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
Keywords: Schlieren, optical flow, supersonic jets, shock shear layer
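The classical Horn-Schunck scheme on which the adapted algorithm is based can be sketched on a synthetic image pair of the kind used in the validation step (this is the textbook iteration, not the authors' adapted version; image size, α and iteration count are arbitrary). It alternates between the brightness-constancy data term and a smoothness term that averages the flow over neighbouring pixels:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=0.1, n_iter=200):
    """Minimal Horn-Schunck optical flow for a pair of grayscale frames."""
    # Centered spatial gradients (averaged over both frames) and temporal gradient.
    avg = 0.5 * (im1 + im2)
    Ix = 0.5 * (np.roll(avg, -1, axis=1) - np.roll(avg, 1, axis=1))
    Iy = 0.5 * (np.roll(avg, -1, axis=0) - np.roll(avg, 1, axis=0))
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        # Smoothness term: average flow over the 4-neighbourhood.
        u_bar = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        v_bar = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0) + np.roll(v, 1, 1) + np.roll(v, -1, 1))
        # Data term: drive Ix*u + Iy*v + It toward zero, weighted against alpha.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Synthetic validation case: a Gaussian blob translated one pixel to the right.
y, x = np.mgrid[0:64, 0:64]
im1 = np.exp(-((x - 31) ** 2 + (y - 32) ** 2) / 50.0)
im2 = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0)
u, v = horn_schunck(im1, im2)

# Intensity-weighted mean flow over the blob should be close to (1, 0).
w = im1
print(f"mean flow in blob: u={np.sum(u * w) / np.sum(w):.2f}, v={np.sum(v * w) / np.sum(w):.2f}")
```

Applying such an algorithm to Schlieren images recovers motion of density-gradient structures (e.g., in the shear layer) rather than the velocity field directly, which is one of the interpretation challenges the validation against simulations is meant to address.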
Procedia PDF Downloads 312