Search results for: measurement validity
570 An Experimental Investigation of Rehabilitation and Strengthening of Reinforced Concrete T-Beams Under Static Monotonic Increasing Loading
Authors: Salem Alsanusi, Abdulla Alakad
Abstract:
An experimental investigation was conducted to study the flexural behaviour of reinforced concrete T-beams. The beams were loaded to pre-designated stress levels, expressed as percentages of the calculated collapse loads, and were then repaired either with a reinforced concrete jacket or with externally bolted steel plates. Twelve full-scale beams were tested in this experimental program. Eight of the twelve beams were loaded to different load levels and tested before and after repair with a Reinforced Concrete Jacket (RCJ); the applied load levels were 60%, 77% and 100% of the calculated collapse loads. The remaining four beams were tested before and after repair with Bolted Steel Plates (BSP); of these four beams, two were loaded to 100% of the calculated failure load and the other two were not subjected to any load. The eight beams assigned to the RCJ test were repaired using a reinforced concrete jacket, and the four beams assigned to the BSP test were all repaired using a steel plate at the bottom. All the strengthened beams were gradually loaded until failure occurred. In each loading case, the behaviour of the beams before and after strengthening was studied through close inspection of crack propagation and through extensive measurements of deformation and strength. The stress-strain curve of the reinforcing steel and the failure strains measured in the tests were used to calculate the failure loads of the beams before and after strengthening. The calculated failure loads were close to the actual failure loads: from 85% to 90% for beams before repair, from 70% to 85% for beams repaired with the reinforced concrete jacket, and from 50% to 85% for beams repaired with bolted steel plates.
It was observed that both the jacketing and the bolted steel plate methods could effectively restore the full flexural capacity of the damaged beams. The reinforced concrete jacket increased the failure load by about 67%, whereas the bolted steel plates recovered the original failure load.
Keywords: rehabilitation, strengthening, reinforced concrete, beams deflection, bending stresses
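The "calculated collapse loads" above come from the section's flexural capacity. The sketch below is not the authors' actual calculation; it is a generic illustration of such a capacity estimate, with all dimensions, material strengths and the simply supported midspan-load arrangement invented for the example.

```python
# Hedged sketch: ultimate flexural capacity of a T-beam whose neutral axis
# stays within the flange (so it acts as a rectangular section), using the
# Whitney rectangular stress block. All numbers below are illustrative.

def flexural_capacity_knm(As_mm2, fy_mpa, fc_mpa, b_mm, d_mm):
    """Nominal moment capacity Mn = As*fy*(d - a/2)."""
    a = As_mm2 * fy_mpa / (0.85 * fc_mpa * b_mm)   # stress-block depth (mm)
    Mn_nmm = As_mm2 * fy_mpa * (d_mm - a / 2.0)    # N*mm
    return Mn_nmm / 1e6                            # kN*m

def collapse_load_kn(Mn_knm, span_m):
    """Midspan point load P producing Mn on a simply supported beam: P = 4*M/L."""
    return 4.0 * Mn_knm / span_m

# Hypothetical section: 603 mm^2 of steel (3-16 mm bars), fy = 420 MPa,
# fc' = 30 MPa, effective flange width 600 mm, effective depth 350 mm, 4 m span.
Mn = flexural_capacity_knm(603, 420, 30, 600, 350)
P = collapse_load_kn(Mn, 4.0)
```

A percentage of `P` would then define each pre-damage loading level in a scheme like the one described above.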
Procedia PDF Downloads 308
569 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study
Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari
Abstract:
The building sector is responsible, in many industrialized countries, for about 40% of the total energy requirements, so it seems necessary to devote some effort to this area in order to achieve a significant reduction of energy consumption and of greenhouse gas emissions. The paper presents a study aiming at providing a design methodology able to identify the best configuration of the building/plant system from a technical, economic and environmental point of view. Normally, the classical approach involves analysis of a building's energy loads under steady-state conditions, followed by the selection of measures aimed at improving energy performance, based on the previous experience of the architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation tools (TRNSYS and RETScreen) that together allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building consists of a basement and three floors, with a total floor area of about 3,000 square meters. The first step was the determination of the heating and cooling energy loads of the building in a dynamic regime by means of TRNSYS, which simulates the real energy needs of the building as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions and of the inertial properties of the structure. With TRNSYS it is possible to obtain quite accurate and reliable results that allow effective building-HVAC system combinations to be identified.
The second step consisted of using the output data obtained with TRNSYS as input to the calculation model RETScreen, which makes it possible to compare different system configurations from the energy, environmental and financial points of view, with an analysis of investment, operation and maintenance costs, thereby allowing the economic benefit of possible interventions to be determined. The classical methodology often leads to the choice of conventional plant systems, while RETScreen provides a financial-economic assessment of innovative energy systems with low environmental impact. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by RETScreen for different design options. For example, the analysis performed on the case-study building found that the most suitable plant solution, taking into account technical, economic and environmental aspects, is the one based on a CCHP system (Combined Cooling, Heating, and Power) using an internal combustion engine.
Keywords: energy, system, building, cooling, electrical
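The financial screening that RETScreen automates ultimately compares incremental capital cost against annual savings. The toy sketch below illustrates only the underlying arithmetic of two common indicators (simple payback and net present value); every cost, saving, rate and lifetime figure is an invented placeholder, not data from the case study.

```python
# Hedged sketch of the financial indicators behind a plant-option comparison:
# simple payback and net present value (NPV). All figures are illustrative.

def simple_payback(extra_capital, annual_saving):
    """Years needed for savings to repay the extra investment."""
    return extra_capital / annual_saving

def npv(extra_capital, annual_saving, rate, years):
    """Present value of the savings stream minus the extra investment."""
    pv = sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))
    return pv - extra_capital

# Hypothetical CCHP option vs. conventional boiler + chiller:
extra_capex = 250_000.0      # extra investment for CCHP (invented)
yearly_saving = 40_000.0     # annual energy-cost reduction (invented)
payback = simple_payback(extra_capex, yearly_saving)         # 6.25 years
value = npv(extra_capex, yearly_saving, rate=0.05, years=20)
```

A positive `value` over the plant lifetime is the kind of result that would favor the CCHP configuration in a screening of this sort.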
Procedia PDF Downloads 575
568 Performance of Reinforced Concrete Beams under Different Fire Durations
Authors: Arifuzzaman Nayeem, Tafannum Torsha, Tanvir Manzur, Shaurav Alam
Abstract:
Performance evaluation of reinforced concrete (RC) beams subjected to accidental fire is significant for post-fire capacity measurement. The mechanical properties of any RC beam degrade due to heating, since the strength and modulus of concrete and reinforcement suffer considerable reduction at elevated temperatures. Moreover, fire-induced thermal dilation and shrinkage cause internal stresses within the concrete and eventually result in cracking, spalling, and loss of stiffness, which ultimately leads to a lower service life. However, conducting a full-scale comprehensive experimental investigation of RC beams exposed to fire is difficult and cost-intensive, whereas finite element (FE) based numerical study can provide an economical alternative for evaluating the post-fire capacity of RC beams. In this study, an attempt has been made to study the fire behavior of RC beams under different durations of fire using the FE software package ABAQUS. The concrete damaged plasticity model in ABAQUS was used to simulate the behavior of the RC beams, and the effect of temperature on the strength and modulus of concrete and steel was modeled following the relevant Eurocodes. Initially, the FE models were validated against several experimental results from available scholarly articles. It was found that the response of the developed FE models matched quite well with the experimental outcomes for beams not exposed to heat. The FE analysis of beams subjected to fire showed some deviation from the experimental results, particularly in terms of stiffness degradation; however, the ultimate strength and deflection of the FE models were similar to the experimental values. The developed FE models thus exhibited good potential to predict the fire behavior of RC beams. Once validated, the FE models were used to analyze several RC beams of different strengths (ranging between 20 MPa and 50 MPa) exposed to the standard fire curve (ASTM E119) for different durations.
The post-fire performance of the RC beams was investigated in terms of load-deflection behavior, flexural strength, and deflection characteristics.
Keywords: fire durations, flexural strength, post fire capacity, reinforced concrete beam, standard fire
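The Eurocode temperature effects mentioned above enter an FE model as reduction factors applied to the ambient material properties. As a minimal sketch, the concrete compressive-strength reduction factor can be linearly interpolated from the tabulated values; the table below is recalled from EN 1992-1-2 (Table 3.1, siliceous aggregate) and should be verified against the code before any real use.

```python
# Hedged sketch: interpolating the Eurocode concrete strength reduction
# factor kc(theta) for siliceous aggregate concrete. Table values recalled
# from EN 1992-1-2; verify against the standard before use.
import numpy as np

theta_c = np.array([20, 100, 200, 300, 400, 500, 600,
                    700, 800, 900, 1000, 1100, 1200], dtype=float)
kc = np.array([1.00, 1.00, 0.95, 0.85, 0.75, 0.60, 0.45,
               0.30, 0.15, 0.08, 0.04, 0.01, 0.00])

def reduced_strength(fc20_mpa, temperature_c):
    """Compressive strength at temperature theta: fc(theta) = kc(theta) * fc(20C)."""
    return fc20_mpa * np.interp(temperature_c, theta_c, kc)

# Example: a 30 MPa concrete heated to 450 C retains kc = 0.675 of its strength.
fc_hot = reduced_strength(30.0, 450.0)   # 20.25 MPa
```

In a fire simulation of the kind described, each integration point would receive its own temperature history and hence its own reduced strength and modulus.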
Procedia PDF Downloads 143
567 Use of Radiation Chemistry: Instrumental Neutron Activation Analysis (INAA) and Atomic Absorption Spectroscopy (AAS) for the Elemental Analysis of Medicinal Plants from India Used in the Treatment of Heart Diseases
Authors: B. M. Pardeshi
Abstract:
Introduction: Minerals and trace elements are chemical elements required by our bodies for the numerous biological and physiological processes that are necessary for the maintenance of health. Medicinal plants are highly beneficial for the maintenance of good health and the prevention of diseases, and they are known as potential sources of minerals and vitamins. An estimated 30 to 40% of today's conventional drugs derive from the medicinal and curative properties of plants, which are also employed in herbal supplements, botanicals and nutraceuticals. Aim: The authors explored the mineral element content of some herbs, because mineral elements may play a significant role in the development and treatment of gastrointestinal diseases, and a close connection between the presence or absence of mineral elements and inflammatory mediators has been noted. Methods: The present study deals with the elemental analysis of medicinal plants by Instrumental Neutron Activation Analysis (INAA) and Atomic Absorption Spectroscopy (AAS). Medicinal herbs prescribed for skin diseases were purchased from markets and analyzed by INAA using a ²⁵²Cf spontaneous fission neutron source (flux about 10⁹ n s⁻¹), with the induced activities counted by γ-ray spectrometry, and by AAS (Perkin Elmer 3100 Model) at the Department of Chemistry, University of Pune, India, for the measurement of major, minor and trace elements. Results: 15 elements, viz. Al, K, Cl, Na and Mn by INAA and Cu, Co, Pb, Ni, Cr, Ca, Fe, Zn, Hg and Cd by AAS, were analyzed in different medicinal plants from India. A critical examination of the data shows that the elements Ca, K, Cl, Al and Fe are present at major levels in most of the samples, while the other elements Na, Mn, Cu, Co, Pb, Ni, Cr, Zn, Hg and Cd are present at minor or trace levels.
Conclusion: The beneficial therapeutic effects of the studied herbs may be related to their mineral element content. The elemental concentrations in the different medicinal plants are discussed.
Keywords: instrumental neutron activation analysis, atomic absorption spectroscopy, medicinal plants, trace elemental analysis, mineral contents
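INAA concentrations are commonly obtained by the relative (comparator) method, in which the sample's induced gamma activity is compared with that of a standard irradiated and counted under identical conditions. Whether the authors used exactly this scheme is not stated in the abstract, so the sketch below is a generic illustration with invented activities, masses and standard concentration.

```python
# Hedged sketch of the relative (comparator) method used in INAA:
# C_sample = C_std * (A_sample / m_sample) / (A_std / m_std),
# valid when sample and standard share irradiation and counting conditions.
# All numbers below are illustrative.

def concentration_ppm(A_sample, m_sample_g, A_std, m_std_g, C_std_ppm):
    """Element concentration from specific activities of sample and standard."""
    return C_std_ppm * (A_sample / m_sample_g) / (A_std / m_std_g)

# Hypothetical Fe determination: sample counts 12500 at 0.50 g,
# standard counts 20000 at 0.40 g with 100 ppm Fe.
c_fe = concentration_ppm(A_sample=12500, m_sample_g=0.50,
                         A_std=20000, m_std_g=0.40, C_std_ppm=100.0)
```

Decay corrections and peak-area fitting, which a real analysis would include, are omitted here for brevity.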
Procedia PDF Downloads 332
566 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System
Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii
Abstract:
Today, dairy farm experts and farmers well recognize the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production, manage the feeding system, serve as an indicator of abnormality in health, and even be utilized to manage healthy calving times and processes. Traditionally, BCS measures are made by animal experts or trained technicians based on visual observations focusing on the pin bones, the pin, thurl and hook area, tail head shapes, hook angles, and the short and long ribs. Since the traditional technique is highly manual and subjective, the results can differ between scorers, and the process is not cost-effective. This paper therefore proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points are automatically extracted from the cow image region by using image processing techniques. Specifically, there are 23 anatomical points in the regions of the ribs, hook bones, pin bone, thurl and tail head, extracted by using block-region-based vertical and horizontal histogram methods. According to animal experts, the body condition scores depend mainly on the shape structure of these regions. Therefore, the second module investigates algebraic and geometric properties of the extracted anatomical points. Specifically, second-order polynomial regression is applied to a subset of anatomical points to produce the regression coefficients, which are utilized as part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head and hook bone area are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained by using a Markov classification process to assign a BCS to individual cows.
The assigned BCS are then revised by using a multiple regression method to produce the final BCS for each dairy cow. In order to confirm the validity of the proposed method, a monitoring video camera was set up at the milking rotary parlor to take top-view images of cows. The proposed method extracts the key anatomical points and the corresponding feature vector for each individual cow; the multiple regression calculator and the Markov chain classification process are then utilized to produce the estimated body condition score for each cow. The experimental results, tested on 100 dairy cows from a self-collected dataset and a public benchmark dataset, are very promising, with an accuracy of 98%.
Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression
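The feature-extraction step described above combines polynomial regression coefficients with landmark angles. The sketch below illustrates that combination in the simplest possible terms; the point coordinates, the parabolic "contour" and the angle example are all invented, and the real system would of course use the 23 automatically detected anatomical points.

```python
# Hedged sketch of the second module's feature construction: fit a
# second-order polynomial to a subset of anatomical points and append a
# landmark angle. Coordinates below are synthetic, not real cow landmarks.
import numpy as np

def poly2_features(xs, ys):
    """Coefficients [a, b, c] of the least-squares fit y = a*x^2 + b*x + c."""
    return np.polyfit(xs, ys, 2)

def angle_at(p, a, b):
    """Angle (degrees) at point p formed by the rays toward a and b."""
    v1, v2 = np.asarray(a, float) - p, np.asarray(b, float) - p
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Points lying on an exact parabola y = 0.5*x^2 + 1 (an idealized contour):
xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
ys = 0.5 * xs**2 + 1.0
coeffs = poly2_features(xs, ys)                      # close to [0.5, 0.0, 1.0]

# Append one landmark angle, e.g. a right angle at the origin:
feature_vector = np.concatenate([coeffs, [angle_at((0.0, 0.0), (1, 0), (0, 1))]])
```

In the full system such vectors, built per cow, would feed the Markov classification stage.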
Procedia PDF Downloads 162
565 Challenging Role of Talent Management, Career Development and Compensation Management toward Employee Retention and Organizational Performance with Mediating Effect of Employee Motivation in Service Sector of Pakistan
Authors: Muhammad Younas, Sidra Sawati, M. Razzaq Athar
Abstract:
Organizational development history reveals that it has always been a challenge to identify and fathom the role of talent management, career development and compensation management in employee retention and organizational performance. Organizations strive hard to measure the impact of all the factors that affect employee retention and organizational performance, and researchers have worked a great deal to establish the relationship of the independent variables, i.e., talent management, career development and compensation management, with the dependent variables, i.e., employee retention and organizational performance. Employees equipped with the latest skills and long-lasting loyalty play a significant role in the successful achievement of the short-term as well as long-term goals of an organization, and retaining valuable and resourceful employees for a longer time is equally essential for meeting the set goals. Organizations that spend a reasonable share of their resources on measures that help retain their employees, through talent management and satisfactory career development, enjoy a competitive edge over their competitors. Human resource is regarded as one of the most precious and difficult resources to manage: it has its own needs and requirements, it becomes an easy prey to monotony when career development is lacking, and its wants and aspirations are seldom met completely but can be managed through career development and compensation management. In this era of competition, organizations have to take viable steps to manage their resources, especially human resources. Top management and managers keep working toward workable solutions to the challenges of career development and compensation management, as their ultimate goal is to ensure organizational performance at the optimum level.
The current study was conducted to examine the impact of talent management, career development and compensation management on employee retention and organizational performance, with the mediating effect of employee motivation, in the service sector of Pakistan. The study is based on the Resource Based View (RBV) and Ability Motivation Opportunity (AMO) theories, which suggest that by increasing internal resources an organization can manage employee talent, career development through compensation management, and employee motivation more effectively, resulting in effective execution of HRM practices for employee retention and enabling the organization to achieve and sustain competitive advantage through optimal performance. Data collection was made through a structured questionnaire based on adopted instruments, after testing for reliability and validity. A total of 300 employees of 30 firms in the service sector of Pakistan were sampled through a non-probability sampling technique. Regression analysis revealed that talent management, career development and compensation management have a significant positive impact on employee retention and perceived organizational performance. The results further showed that employee motivation has a significant mediating effect on the relationships with employee retention and organizational performance. The interpretation of the findings and the limitations, as well as theoretical and managerial implications, are also discussed.
Keywords: career development, compensation management, employee retention, organizational performance, talent management
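The "mediating effect" tested above follows the standard regression logic: the predictor affects the mediator, and the mediator affects the outcome while controlling for the predictor, so the indirect effect is the product of the two paths. The sketch below demonstrates that logic on synthetic data (all scores, path coefficients and noise levels are invented); the study itself would use the questionnaire scores and appropriate significance tests.

```python
# Hedged sketch of regression-based mediation (Baron-Kenny style logic):
# X = talent management, M = employee motivation, Y = retention.
# Data are synthetic; coefficients 0.6, 0.4, 0.2 are invented ground truth.
import numpy as np

rng = np.random.default_rng(0)
n = 300                                   # sample size mirroring the study
talent = rng.normal(size=n)               # X
motivation = 0.6 * talent + rng.normal(scale=0.5, size=n)              # M
retention = 0.4 * motivation + 0.2 * talent + rng.normal(scale=0.5, size=n)  # Y

def ols(y, *xs):
    """Least-squares coefficients [intercept, slopes...]."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(motivation, talent)[1]             # path X -> M
b = ols(retention, talent, motivation)[2]  # path M -> Y, controlling for X
indirect_effect = a * b                    # mediated effect, around 0.6*0.4
```

A nonzero `indirect_effect` (formally tested with, e.g., a Sobel test or bootstrapping) is what supports a mediation claim like the one in the abstract.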
Procedia PDF Downloads 321
564 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare
Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl
Abstract:
Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. 
Further, the BC chart's performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
Keywords: average run length (ARL), Bernoulli CUSUM (BC) chart, beta binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval
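The beta-binomial posterior predictive construction can be sketched briefly: a Beta prior on the CI rate is updated with historical data, and the number of events in the next batch of cases then follows a beta-binomial distribution, whose quantiles give control limits. The sketch below uses the simpler central-interval limits (the paper's preferred HPD limits would need a search over intervals), and all prior parameters, counts and batch sizes are invented.

```python
# Hedged sketch of a beta-binomial posterior predictive (BBPP) chart limit.
# Beta(a0, b0) prior + historical data x_hist/n_hist -> posterior Beta;
# the event count in the next batch of m cases is then beta-binomial.
# Central-interval limits shown; prior and data values are illustrative.
from scipy.stats import betabinom

a0, b0 = 1.0, 1.0            # uniform prior on the CI rate
x_hist, n_hist = 30, 600     # historical events / cases (invented)
a_post, b_post = a0 + x_hist, b0 + (n_hist - x_hist)

m = 100                      # size of the next monitoring batch
# Approximate 3-sigma control limits via central predictive quantiles:
lcl = betabinom.ppf(0.00135, m, a_post, b_post)
ucl = betabinom.ppf(0.99865, m, a_post, b_post)
# A batch whose event count falls outside [lcl, ucl] signals a possible
# change in the underlying CI rate.
```

Because the beta-binomial is typically right-skewed for low CI rates, the HPD interval studied in the paper generally gives tighter, better-placed limits than this central interval.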
Procedia PDF Downloads 204
563 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ
Authors: Lalita, Niladri Sarkar, Subhasis Ghosh
Abstract:
Since the discovery of high temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal state resistivity over a wide range of temperature, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is the signature of a strongly correlated metallic state, known as a "strange metal", attributed to non-Fermi liquid (NFL) behavior. The proximity of superconductivity to LITR suggests that there may be a common underlying origin. The LITR has been shown to be due to an unknown dissipative phenomenon, restricted by quantum mechanics and commonly known as "Planckian dissipation", a term first coined by Zaanen; the associated inelastic scattering time τ is given by 1/τ = αkBT/ℏ, where ℏ, kB and α are the reduced Planck constant, the Boltzmann constant and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. There are several striking issues which remain to be resolved if we wish to find, or at least get a clue towards, the microscopic origin of maximal dissipation in cuprates. (i) The universality of α ~ 1: recently, doubts have been raised in some cases. (ii) So far, Planckian dissipation has been demonstrated in overdoped cuprates, but if proximity to quantum criticality is important, then Planckian dissipation should also be observed in optimally doped and marginally underdoped cuprates; the link between Planckian dissipation and quantum criticality remains an open problem. (iii) The validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high temperature superconductor Bi2Sr2Ca2Cu3O10+δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, with x = 0.16 (optimally doped) and x = 0.145 (marginally underdoped), have been used for this investigation.
It was found that steady-state photo-excitation converts magnetic Cu2+ ions to nonmagnetic Cu1+ ions, which reduces the superconducting transition temperature (Tc) by suppressing the superfluid density. In Bi-2223, one would expect the maximum suppression of Tc at the charge transfer gap, and indeed we observed that the suppression of Tc sets in at 2 eV, which is the charge transfer gap in Bi-2223. We attribute this transition to Cu-3d9 (Cu2+) going to Cu-3d10 (Cu+), known as the d9-d10L transition; photoexcitation turns some Cu ions in the CuO2 planes into spinless non-magnetic potential perturbations, as Zn2+ does in the CuO2 plane in Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photoexcitation, and superconductivity can be destroyed completely by introducing about 2% of Cu1+ ions in this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation from underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that we could vary Tc dynamically and reversibly, so that the LITR and the associated Planckian dissipation could be studied over wide ranges of Tc without changing the doping chemically.
Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal
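The Planckian bound quoted above, 1/τ = αkBT/ℏ, fixes the scattering time implied by linear-in-T resistivity once α is known. A minimal numerical sketch (the temperature chosen is arbitrary, for illustration only):

```python
# Hedged sketch: scattering time from the Planckian bound
# 1/tau = alpha * kB * T / hbar, i.e. tau = hbar / (alpha * kB * T).
kB = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s

def planckian_tau(T_kelvin, alpha=1.0):
    """Inelastic scattering time at temperature T for dimensionless alpha."""
    return hbar / (alpha * kB * T_kelvin)

tau_100K = planckian_tau(100.0)   # about 7.6e-14 s, i.e. tens of femtoseconds
```

Experiments of the kind described extract α by comparing the measured transport scattering rate (from the LITR slope and carrier parameters) against this bound.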
Procedia PDF Downloads 62
562 Experimental Verification of Similarity Criteria for Sound Absorption of Perforated Panels
Authors: Aleksandra Majchrzak, Katarzyna Baruch, Monika Sobolewska, Bartlomiej Chojnacki, Adam Pilch
Abstract:
Scaled modeling is very common in areas of science such as aerodynamics or fluid mechanics, since defining characteristic numbers makes it possible to determine the relations between objects under test and their models. In acoustics, scaled modeling is aimed mainly at the investigation of room acoustics, sound insulation and sound absorption phenomena. Despite such a range of applications, no method has been developed that would enable acoustical perforated panels to be scaled freely while maintaining their sound absorption coefficient in a desired frequency range. Indeed, theoretical and numerical analyses have proven that it is not physically possible to obtain a given sound absorption coefficient in a desired frequency range by directly scaling all the physical dimensions of a perforated panel according to a defined characteristic number. This paper is a continuation of the research mentioned above and presents a practical evaluation of the theoretical and numerical analyses. Measurements of the sound absorption coefficient of perforated panels were performed in order to verify the previous analyses and, as a result, to find the relations between full-scale perforated panels and their models which will enable proper scaling. The measurements were conducted in a one-to-eight model of the reverberation chamber of the Technical Acoustics Laboratory, AGH. The obtained results verify the theses proposed after the theoretical and numerical analyses. Finding the relations between full-scale and modeled perforated panels will allow measurement samples equivalent to the original ones to be produced. As a consequence, it will make the process of designing acoustical perforated panels easier and will also lower the cost of prototype production.
With this knowledge, it will be possible to emulate panels used, or to be used, in a full-scale room more precisely in a constructed model, and as a result to imitate or predict the acoustics of the modeled space more accurately.
Keywords: characteristic numbers, dimensional analysis, model study, scaled modeling, sound absorption coefficient
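Why naive geometric scaling shifts a perforated absorber's behavior can be illustrated with the simplest Helmholtz-resonator estimate of its resonance frequency. The sketch below is not the authors' model: it uses a common textbook approximation (end correction of 0.8 times the hole diameter) and invented panel dimensions, purely to show that dividing every dimension by 8 multiplies the resonance frequency by 8.

```python
# Hedged sketch: Helmholtz-resonator estimate of a perforated panel
# absorber's resonance frequency, f0 = c/(2*pi) * sqrt(sigma / (D * t_eff)),
# with t_eff = t + 0.8*d (a common end-correction approximation).
# Panel dimensions below are invented for illustration.
import math

def resonance_hz(sigma, t_m, d_m, D_m, c=343.0):
    """sigma: perforation ratio; t: panel thickness; d: hole diameter;
    D: cavity depth; all lengths in meters."""
    t_eff = t_m + 0.8 * d_m
    return c / (2 * math.pi) * math.sqrt(sigma / (D_m * t_eff))

# Full-scale panel: 1% perforation, 10 mm thick, 5 mm holes, 100 mm cavity.
f_full = resonance_hz(0.01, 0.010, 0.005, 0.100)
# The same panel with every dimension divided by 8 (a naive 1:8 model):
f_model = resonance_hz(0.01, 0.010 / 8, 0.005 / 8, 0.100 / 8)
ratio = f_model / f_full   # exactly 8: the absorption peak moves up one scale
```

This frequency shift matches the model-chamber frequency scaling only in this idealized picture; the viscous and thermal losses in the holes do not scale the same way, which is one reason direct geometric scaling fails to preserve the absorption coefficient.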
Procedia PDF Downloads 197
561 Effectiveness of Technology Enhanced Learning in Orthodontic Teaching
Authors: Mohammed Shaath
Abstract:
Aims: Technological advancements in teaching and learning have made significant improvements over the past decade and have been incorporated into institutions to aid the learner's experience. This review aims to assess whether Technology Enhanced Learning (TEL) pedagogy is more effective at improving students' attitudes and knowledge retention in orthodontic training than traditional methods. Methodology: The searches comprised systematic reviews (SRs) comparing TEL and traditional teaching methods, drawn from the following databases: PubMed, SCOPUS, Medline, and Embase. One researcher performed the screening, data extraction, and analysis, and assessed risk of bias and quality using A Measurement Tool to Assess Systematic Reviews 2 (AMSTAR-2). Kirkpatrick's 4-level evaluation model was used to evaluate the educational values. Results: A total of 34 SRs were identified after the removal of duplicates and irrelevant SRs; 4 fit the inclusion criteria. On Level 1, students responded positively to TEL methods, although the harder a platform was to use, the less favourably it was rated; nonetheless, students still showed high levels of acceptability. Level 2 showed no significant overall advantage of TEL methods in knowledge gain, although one SR showed that certain aspects of study within orthodontics deliver a statistically significant improvement with TEL. Level 3 was the least reported on; its results suggest that, without time restrictions, TEL methods may be advantageous. Level 4 shows that both methods are equally effective, but TEL has the potential to overtake traditional methods in the future as a form of active, student-centered approach. Conclusion: TEL has a high level of acceptability and the potential to improve learning in orthodontics. The current reviews could be improved, but the biggest issue that needs to be addressed is the primary studies, which show a lower level of evidence and heterogeneity in their results.
As it stands, the replacement of traditional methods with TEL cannot be fully supported in an evidence-based manner. However, the potential of TEL methods has been recognized, and there is already some evidence that they can be more effective in some aspects of learning, catering to a more technology-savvy generation.
Keywords: TEL, orthodontic, teaching, traditional
Procedia PDF Downloads 44
560 Using Hyperspectral Camera and Deep Learning to Identify the Ripeness of Sugar Apples
Authors: Kuo-Dung Chiou, Yen-Xue Chen, Chia-Ying Chang
Abstract:
This study uses AI technology to establish an expert system and a fruit appearance database for pineapples and custard apples. Images are collected covering appearance defects and fruit maturity, and deep learning is used to detect the location of the fruit and to assess the fruit's appearance flaws and maturity in real time. In addition, a hyperspectral camera was used to scan pineapples and custard apples, and the light reflection in different frequency bands was used to find the key band for pectin softening in post-ripe fruits. A large amount of multispectral image collection and data analysis was conducted to establish a database of Pineapple Custard Apple and Big Eyed Custard Apple, which includes a high-definition color image database, a hyperspectral database covering the 377-1020 nm band, and a five-band (450, 500, 670, 720, 800 nm) multispectral database; it comprises 4896 images with manually labeled ground truth, 26 hyperspectral pineapple custard apple fruits (520 images each), and 168 multispectral custard apple fruits (5 images each). The color image database was used to train the deep learning Yolo v4 pre-trained network architecture, with training weights established from the fruit database, achieving real-time detection performance with a recognition rate over 97.96%. We also used the multispectral camera to take a large number of continuous shots and calculated the difference and the average ratio between the fruit's 670 and 720 nm bands; both indices follow the same trend, increasing until maturity and decreasing after maturity. Subsequently, further sub-bands will be added to extend the numerical analysis to sugar content and moisture, and the absolute value of maturity and the maturity data curve will be determined.
Keywords: hyperspectral image, fruit firmness, deep learning, automatic detection, automatic measurement, intelligent labor saving
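The band arithmetic described above, the per-fruit difference and ratio between the 670 nm and 720 nm reflectance bands, can be sketched in a few lines. The reflectance arrays and values below are synthetic placeholders, not data from the study's database.

```python
# Hedged sketch of the ripeness indices described above: mean difference
# and mean ratio between two co-registered band images over the fruit
# region. Reflectance values below are invented.
import numpy as np

def band_features(r670, r720):
    """Mean per-pixel difference and ratio of the 720 nm and 670 nm bands."""
    diff = float(np.mean(r720 - r670))
    ratio = float(np.mean(r720 / r670))
    return diff, ratio

# Synthetic 4x4 "fruit region" with uniform reflectance in each band:
r670 = np.full((4, 4), 0.20)
r720 = np.full((4, 4), 0.50)
diff, ratio = band_features(r670, r720)   # 0.30 and 2.5
```

Tracked over continuous shots of the same fruit, a rise and subsequent fall of such an index is the maturity signature the abstract reports.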
Procedia PDF Downloads 5
559 Polypyrrole Integrated MnCo2O4 Nanorods Hybrid as Electrode Material for High Performance Supercapacitor
Authors: Santimoy Khilari, Debabrata Pradhan
Abstract:
Ever-increasing energy demand and a growing energy crisis, along with environmental issues, emphasize research on sustainable energy conversion and storage systems. Recently, supercapacitors, or electrochemical capacitors, have emerged as a promising energy storage technology for the future generation. The activity of supercapacitors generally depends on the efficiency of their electrode materials, so the development of cost-effective, efficient electrode materials for supercapacitors is one of the challenges for the scientific community. Transition metal oxides with the spinel crystal structure receive much attention for different electrochemical applications in energy storage/conversion devices because of their improved performance as compared to simple oxides. In the present study, we have synthesized a polypyrrole (PPy) supported manganese cobaltite nanorod (MnCo2O4 NRs) hybrid electrode material for supercapacitor application. The MnCo2O4 NRs were synthesized by a simple hydrothermal and calcination approach, and the MnCo2O4 NRs/PPy hybrid was prepared by in situ impregnation of MnCo2O4 NRs during the polymerization of pyrrole. The surface morphology and microstructure of the as-synthesized samples were characterized by scanning electron microscopy and transmission electron microscopy, respectively, and the crystallographic phase of the MnCo2O4 NRs, PPy and hybrid was determined by X-ray diffraction. The electrochemical charge storage activity of the MnCo2O4 NRs, PPy and MnCo2O4 NRs/PPy hybrid was evaluated by cyclic voltammetry, chronopotentiometry and electrochemical impedance spectroscopy. A significant improvement in specific capacitance was achieved with the MnCo2O4 NRs/PPy hybrid as compared to the individual components. Furthermore, mechanically mixed MnCo2O4 NRs and PPy show a lower specific capacitance than the MnCo2O4 NRs/PPy hybrid, suggesting the importance of the in situ hybrid preparation.
The stability of the as-prepared electrode materials was tested by cyclic charge-discharge measurements over 1000 cycles; a maximum of 94% of the capacitance was retained with the MnCo2O4 NRs/PPy hybrid electrode. This study suggests that the MnCo2O4 NRs/PPy hybrid can be used as a low-cost electrode material for charge storage in supercapacitors.
Keywords: supercapacitors, nanorods, spinel, MnCo2O4, polypyrrole
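Specific capacitance from the chronopotentiometry (galvanostatic charge-discharge) measurements mentioned above is conventionally computed as Cs = I·Δt/(m·ΔV). The sketch below shows that arithmetic with invented current, time, mass and voltage-window values, not the paper's data.

```python
# Hedged sketch: specific capacitance from a galvanostatic discharge,
# Cs = I * dt / (m * dV), plus capacitance retention after cycling.
# All measurement values below are illustrative.

def specific_capacitance(current_a, discharge_time_s, mass_g, d_voltage_v):
    """Gravimetric capacitance in F/g from a constant-current discharge."""
    return current_a * discharge_time_s / (mass_g * d_voltage_v)

def retention_percent(c_final, c_initial):
    """Capacitance retained after cycling, as a percentage."""
    return 100.0 * c_final / c_initial

# Hypothetical: 1 mA discharge over 180 s, 2 mg active mass, 0.9 V window.
cs = specific_capacitance(1e-3, 180.0, 2e-3, 0.9)   # 100 F/g
ret = retention_percent(94.0, 100.0)                # 94% after cycling
```

The 94% figure mirrors the retention reported for the hybrid electrode; the capacitance value itself is purely illustrative.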
Procedia PDF Downloads 340
558 The Brain’s Attenuation Coefficient as a Potential Estimator of Temperature Elevation during Intracranial High Intensity Focused Ultrasound Procedures
Authors: Daniel Dahis, Haim Azhari
Abstract:
Noninvasive image-guided intracranial treatments using high intensity focused ultrasound (HIFU) are on the course of translation into clinical applications. They include, among others, tumor ablation, hyperthermia, and blood-brain-barrier (BBB) penetration. Since many of these procedures are associated with local temperature elevation, thermal monitoring is essential. MRI constitutes an imaging method with high spatial resolution and thermal mapping capacity. It is currently the leading modality for temperature guidance, commonly under the name MRgHIFU (magnetic-resonance guided HIFU). Nevertheless, MRI is a very expensive, non-portable modality, which jeopardizes its accessibility. Ultrasonic thermal monitoring, on the other hand, could provide a modular, cost-effective alternative with higher temporal resolution and accessibility. In order to assess the feasibility of ultrasonic brain thermal monitoring, this study investigated the use of temporal changes in the brain tissue attenuation coefficient (AC) as potential estimators of thermal changes. Newton's law of cooling describes a temporal exponential decay for the temperature of a heated object immersed in a relatively cold surrounding. Similarly, in the case of cerebral HIFU treatments, the temperature in the region of interest, i.e., the focal zone, is suggested to follow the same law. Thus, it was hypothesized that the AC of the irradiated tissue may follow a temporal exponential behavior during the cool-down regime. Three ex-vivo bovine brain tissue specimens were inserted into plastic containers along with four thermocouple probes in each sample. The containers were placed inside a specially built ultrasonic tomograph and scanned at room temperature. The corresponding pixel-averaged AC was acquired for each specimen and used as a reference. Subsequently, the containers were placed in a beaker containing hot water and gradually heated to about 45 °C.
They were then repeatedly rescanned during cool-down using an ultrasonic through-transmission raster trajectory until reaching about 30 °C. From the obtained images, the normalized AC and its temporal derivative were registered as functions of temperature and time. The results demonstrated a high correlation (R² > 0.92) of both the brain AC and its temporal derivative with temperature. This indicates the validity of the hypothesis and the possibility of obtaining brain tissue temperature estimates from the temporal AC thermal changes. It is important to note that each brain yielded different AC values and slopes. This implies that a calibration step is required for each specimen. Thus, for practical acoustic monitoring of the brain, two steps are suggested. The first step consists of simply measuring the AC at normal body temperature. The second step entails measuring the AC after a small temperature elevation. In the face of the urgent need for a more accessible thermal monitoring technique for brain treatments, the proposed methodology enables cost-effective, high temporal resolution acoustical temperature estimation during HIFU treatments.
Keywords: attenuation coefficient, brain, HIFU, image-guidance, temperature
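The exponential cool-down assumption can be checked with a quick log-linear fit: if T(t) = T_env + (T0 - T_env)·exp(-kt), then ln(T - T_env) is linear in t with slope -k. A minimal sketch under invented parameters (the temperatures and rate constant are illustrative, not measurements from the study):

```python
import math

# Hypothetical cool-down parameters (illustrative, not study data)
T_env, T0, k = 25.0, 45.0, 0.05  # deg C, deg C, 1/min

def temp(t):
    # Newton's law of cooling: T(t) = T_env + (T0 - T_env) * exp(-k t)
    return T_env + (T0 - T_env) * math.exp(-k * t)

# Sampled "measurements" during cool-down
ts = [0, 5, 10, 15, 20, 25, 30]
Ts = [temp(t) for t in ts]

# Log-linear least squares on ln(T - T_env) = ln(T0 - T_env) - k t
ys = [math.log(T - T_env) for T in Ts]
n = len(ts)
mean_t = sum(ts) / n
mean_y = sum(ys) / n
slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys)) / \
        sum((t - mean_t) ** 2 for t in ts)
k_est = -slope
print(round(k_est, 3))  # recovers the rate constant on noiseless data
```

In the study the fitted quantity is the normalized AC rather than a thermocouple reading; the fitting step is the same once AC is substituted for T.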
Procedia PDF Downloads 166
557 Distance and Coverage: An Assessment of Location-Allocation Models for Fire Stations in Kuwait City, Kuwait
Authors: Saad M. Algharib
Abstract:
The major concern of planners when placing fire stations is finding their optimal locations such that fire companies can reach fire locations within a reasonable response time or distance. Planners are also concerned with the number of fire stations needed to cover all service areas and the fires, as demands, within a standard response time or distance. One of the tools for such analysis is location-allocation models. Location-allocation models enable planners to determine the optimal locations of facilities in an area in order to serve regional demands in the most efficient way. The purpose of this study is to examine the geographic distribution of the existing fire stations in Kuwait City. This study utilized location-allocation models within a Geographic Information System (GIS) environment and a number of statistical functions to assess the current locations of fire stations in Kuwait City. Further, this study investigated how well all service areas are covered and how many additional fire stations are needed, and where. Four different location-allocation models were compared to find which models cover more demands than the others, given the same number of fire stations. This study tests many ways of combining variables, instead of using one variable at a time, when applying these models, in order to create a new measurement that influences the optimal locations for fire stations. This study also tests how sensitive location-allocation models are to different levels of spatial dependency. The results indicate that there are some districts in Kuwait City that are not covered by the existing fire stations. These uncovered districts are clustered together. This study also identifies where to locate the new fire stations. This study provides users of these models with a new variable that can assist them in selecting the best locations for fire stations.
The results include information about how the location-allocation models behave in response to different levels of spatial dependency of demands. The results show that these models perform better with clustered demands. From the additional analysis carried out in this study, it can be concluded that these models perform differently under different spatial patterns.
Keywords: geographic information science, GIS, location-allocation models, geography
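Covering models of the kind compared here can be illustrated with a toy greedy heuristic for the maximal-covering location problem: repeatedly pick the candidate site that covers the most still-uncovered demand points within a coverage distance. All coordinates, the coverage distance, and the station count below are invented; real studies solve such models exactly inside GIS software:

```python
# Toy greedy heuristic for a maximal-covering location model.
demands = {"d1": (0, 0), "d2": (1, 0), "d3": (5, 5), "d4": (6, 5), "d5": (9, 9)}
sites = {"s1": (0, 1), "s2": (5, 4), "s3": (9, 8)}
COVER_DIST = 2.0
p = 2  # number of stations to place

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def covered_by(site):
    # demand points within the coverage distance of a candidate site
    return {d for d, loc in demands.items() if dist(loc, sites[site]) <= COVER_DIST}

chosen, covered = [], set()
for _ in range(p):
    # greedy step: add the candidate covering the most uncovered demands
    candidates = sorted(set(sites) - set(chosen))
    best = max(candidates, key=lambda s: len(covered_by(s) - covered))
    chosen.append(best)
    covered |= covered_by(best)

print(chosen, sorted(covered))  # ['s1', 's2'] ['d1', 'd2', 'd3', 'd4']
```

Note that demand "d5" stays uncovered with p = 2, which is exactly the kind of gap (uncovered districts) the study reports for Kuwait City.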
Procedia PDF Downloads 179
556 Body Composition Analysis of University Students by Anthropometry and Bioelectrical Impedance Analysis
Authors: Vinti Davar
Abstract:
Background: Worldwide, at least 2.8 million people die each year as a result of being overweight or obese, and 35.8 million (2.3%) of global DALYs are caused by overweight or obesity. Obesity is acknowledged as one of the burning public health problems reducing life expectancy and quality of life. Body composition analysis of the university population is essential in assessing nutritional status, as well as the risk of developing diseases associated with abnormal body fat content, so as to make nutritional recommendations. Objectives: The main aim was to determine the prevalence of obesity and overweight in university students using anthropometric analysis and BIA methods. Material and Methods: In this cross-sectional study, 283 university students participated. The body composition analysis was undertaken mainly by using: i) Anthropometric Measurement: height, weight, BMI, waist circumference, hip circumference and skinfold thickness; ii) Bioelectrical impedance, used for analysis of body fat mass, fat percentage and visceral fat, which was measured by a Tanita SC-330P Professional Body Composition Analyzer. The data so collected were compiled in MS Excel and analyzed for males and females using SPSS 16. Results and Discussion: The mean age of the male (n = 153) subjects was 25.37 ± 2.39 years and that of the females (n = 130) was 22.53 ± 2.31 years. The BIA data revealed a very high mean fat percentage for the female subjects, i.e. 30.3 ± 6.5 per cent, whereas the mean fat percentage of the male subjects was 15.60 ± 6.02 per cent, indicating a normal body fat range. The findings showed high visceral fat in both males (12.92 ± 3.02) and females (16.86 ± 4.98). BF% and WHR were higher among females, whereas BMI was higher among males. The most evident correlation was observed between BF% and WHR for female students (r = 0.902; p < 0.001). The correlation of BFM and BF% with the thickness of the triceps, subscapular and abdominal skinfolds and with BMI was significant (p < 0.001).
Conclusion: The studied data made it obvious that there is a need to initiate lifestyle-change strategies, especially for adult females, and to encourage them to improve their dietary intake to prevent the incidence of non-communicable diseases due to obesity and high fat percentage.
Keywords: anthropometry, bioelectrical impedance, body fat percentage, obesity
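The two anthropometric indices at the centre of this abstract reduce to one-line formulas: BMI = weight / height² and WHR = waist / hip. A minimal sketch with made-up example values (they carry no relation to the participants):

```python
# Standard anthropometric indices used in the study; sample values invented.

def bmi(weight_kg, height_m):
    """Body mass index, kg/m^2."""
    return weight_kg / height_m ** 2

def whr(waist_cm, hip_cm):
    """Waist-to-hip ratio (dimensionless)."""
    return waist_cm / hip_cm

example_bmi = bmi(70.0, 1.75)
example_whr = whr(80.0, 100.0)
print(round(example_bmi, 1), round(example_whr, 2))  # 22.9 0.8
```

Body fat percentage, by contrast, comes from the BIA instrument itself (the Tanita analyzer here), not from a closed-form anthropometric formula.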
Procedia PDF Downloads 381
555 Analysis of Radiation-Induced Liver Disease (RILD) and Evaluation of Relationship between Therapeutic Activity and Liver Clearance Rate with Tc-99m-Mebrofenin in Yttrium-90 Microspheres Treatment
Authors: H. Tanyildizi, M. Abuqebitah, I. Cavdar, M. Demir, L. Kabasakal
Abstract:
Aim: Whole liver radiation has a modest benefit in the treatment of unresectable hepatic metastases, but the radiation doses must be kept under control; otherwise, RILD complications may arise. In this study, we aimed to calculate the maximum permissible activity (MPA) and critical organ absorbed doses with MIRD methodology, to evaluate tumour doses for treatment response and whole liver doses for RILD, and, additionally, to find the optimal liver function test. Materials and Methods: This study includes 29 patients who attended our nuclear medicine department for Y-90 microspheres treatment. 10 mCi of Tc-99m MAA was administered to the patients intravenously for dosimetry. Whole body SPECT/CT images were taken within one hour of the injection. Taking the minimum therapeutic tumour dose to be 120 Gy, the amounts of activity were calculated with MIRD methodology considering the volumetric tumour/liver ratio. A sub-working group of 11 patients was created randomly, and the liver clearance rate with Tc-99m-Mebrofenin was calculated according to the Ekman formalism. Results: The volumetric tumour/liver ratio was between 33% and 66% (Maximum Tolerable Dose (MTD) 48-52 Gy) for 4 patients and less than 33% (MTD 72 Gy) for 25 patients. According to these results, the average amount of activity, mean liver dose, and mean tumour dose were 1793.9 ± 1.46 MBq, 32.86 ± 0.19 Gy, and 138.26 ± 0.40 Gy, respectively. RILD was not observed in any patient. In the sub-working group, the correlations between the calculated activity amounts and Bilirubin, Albumin, INR (which indicate the presence of liver disease and its degree), and liver clearance with Tc-99m-Mebrofenin were r = 0.49, r = 0.27, r = 0.43, and r = 0.57, respectively. Discussion: The minimum tumour dose was 120 Gy for a positive dose-response relation. If the volumetric tumour/liver ratio was > 66%, the dose was 30 Gy; if the ratio was 33-66%, the dose was escalated to 48 Gy; if the ratio was < 33%, the dose was 72 Gy.
These dose limitations did not create RILD. It was concluded that clearance measurement with Mebrofenin is the best method to determine liver function. Therefore, the liver clearance rate with Tc-99m-Mebrofenin should be considered in the calculation of yttrium-90 microspheres dosimetry.
Keywords: clearance, dosimetry, liver, RILD
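The MIRD-type activity calculation for Y-90 microspheres is commonly written as D [Gy] = 49.67 × A [GBq] / m [kg], assuming the Y-90 decay energy is absorbed entirely in the target mass. Rearranging gives the activity needed for a prescribed dose. A sketch under that assumption; the target dose and mass below are illustrative, not patient data from this study:

```python
# MIRD-type activity calculation for Y-90 microspheres, assuming the
# commonly quoted relation D [Gy] = 49.67 * A [GBq] / m [kg].
# Example inputs are invented, not patient data.

Y90_FACTOR = 49.67  # Gy * kg / GBq, assumes complete local absorption

def activity_for_dose(target_dose_gy, target_mass_kg):
    """Activity (GBq) needed to deliver target_dose_gy to target_mass_kg."""
    return target_dose_gy * target_mass_kg / Y90_FACTOR

# e.g. 120 Gy (the minimum therapeutic tumour dose) to a 0.5 kg tumour volume
activity_gbq = activity_for_dose(120.0, 0.5)
print(round(activity_gbq, 3))
```

In practice the Tc-99m MAA SPECT/CT partition of activity between tumour, normal liver, and lung shunt modifies this single-compartment figure.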
Procedia PDF Downloads 440
554 Are Oral Health Conditions Associated with Children’s School Performance and School Attendance in the Kingdom of Bahrain - A Life Course Approach
Authors: Seham A. S. Mohamed, Sarah R. Baker, Christopher Deery, Mario V. Vettore
Abstract:
Background: The link between oral health conditions and school performance and attendance remains unclear among Middle Eastern children. The association has been studied extensively in the Western region; however, several concerns have been raised regarding the reliability and validity of measures, the low quality of studies, inadequate inclusion of potential confounders, and the lack of a conceptual framework. These limitations have meant that, to date, there has been no detailed understanding of the association, or of the key social, clinical, behavioural and parental factors which may impact it. Aim: To examine the association between oral health conditions and children's school performance and attendance at Grade 2 in Muharraq city in the Kingdom of Bahrain, using Heilmann et al.'s (2015) life course framework for oral health. Objectives: To (1) describe the prevalence of oral health conditions among 7-8-year-old schoolchildren in the city of Muharraq; (2) analyse the social, biological, behavioural, and parental pathways that link early and current life exposures with children's current oral health status; (3) examine the association between oral health conditions and school performance and attendance among schoolchildren; (4) explore the early and current life course social, biological, behavioural and parental factors associated with children's school outcomes. Design: A time-ordered cross-sectional study was conducted with 466 schoolchildren aged 7-8 years and their parents from Muharraq city in the Kingdom of Bahrain. Data were collected through parents' self-administered questionnaires, children's face-to-face interviews, and dental clinical examinations. Outcome variables, including school performance and school attendance data, were obtained from the parents and school records. The data were analysed using structural equation modelling (SEM).
Results: The prevalence of dental caries, the consequences of dental caries (PUFA/pufa), and enamel developmental defects (EDD) was 93.4%, 25.7%, and 17.2%, respectively. The findings from the SEM showed that children born into families with high SES were less likely to suffer from dentine dental caries (β = -0.248) and more likely to achieve high school performance (β = 0.136) at 7-8 years of age in Muharraq. In the children's current life course, dental plaque was significantly and directly associated with enamel caries (β = 0.094), dentine caries (β = 0.364), and treated teeth (filled or extracted because of dental caries) (β = 0.121), and indirectly associated with dental pain (β = 0.057). Further, dentine dental caries was significantly and directly associated with low school performance (β = -0.155), while dental plaque was indirectly associated with low school performance via dental caries (β = -0.044). Conversely, treated teeth were directly associated with high school performance (β = 0.100). Notably, none of the OHCs, biological, SES, behavioural, or parental conditions was related to school attendance in children. Conclusion: The life course approach was adequate for examining the role of OHCs in children's school performance and attendance. Birth and current (7-8-year-olds) social factors were significant predictors of poor OH and poor school performance.
Keywords: dental caries, life course, Bahrain, school outcomes
Procedia PDF Downloads 111
553 Exposure to Ionizing Radiation Resulting from the Chernobyl Fallout and Childhood Cardiac Arrhythmia: A Population Based Study
Authors: Geraldine Landon, Enora Clero, Jean-Rene Jourdain
Abstract:
In 2005, the Institut de Radioprotection et de Sûreté Nucléaire (IRSN, France) launched a research program named EPICE (acronym for 'Evaluation of Pathologies potentially Induced by CaEsium') to collect scientific information on non-cancer effects possibly induced by chronic exposure to low doses of ionizing radiation, with the view of addressing a question raised by several French NGOs related to the health consequences of the Chernobyl nuclear accident in children. The implementation of the program was preceded by a pilot phase to ensure that the project would be feasible and to determine the conditions for implementing an epidemiological study on a population of several thousand children. The EPICE program, focused on childhood cardiac arrhythmias, started in May 2009 for 4 years, in partnership with the Russian Bryansk Diagnostic Center. The purpose of this cross-sectional study was to determine the prevalence of cardiac arrhythmias in the Bryansk oblast (depending on the contamination of the territory and the caesium-137 whole-body burden) and to assess whether or not caesium-137 was a factor associated with the onset of cardiac arrhythmias. To address these questions, a study bringing together 18,152 children aged 2 to 18 years was initiated; each child received three medical examinations (ECG, echocardiography, and caesium-137 whole-body activity measurement), and some were also given 24-hour Holter monitoring and blood tests. The findings of the study, currently submitted to an international journal (which is why no results can be given at this stage), allow us to answer clearly the issue of radiation-induced childhood arrhythmia, a subject that has been debated for many years.
Our results will certainly be helpful for health professionals responsible for monitoring populations exposed to the releases from the Fukushima Dai-ichi nuclear power plant, and also useful for future comparative studies of children exposed to ionizing radiation in other contexts, such as cancer radiation therapies.
Keywords: Caesium-137, cardiac arrhythmia, Chernobyl, children
Procedia PDF Downloads 246
552 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life
Authors: Sandra Young
Abstract:
The biodiversity literature is vast and heterogeneous. In today's data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to all the literature across biodiversity domains for research and forecasting purposes. Ontologies are being used increasingly to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the fluidity and inconsistency found in the domains comprising biodiversity. Essentially, the problem is a conceptual one: biological taxonomies are formed on the basis of specific, physical specimens, yet nomenclatural rules are used to provide labels to describe these physical objects. These labels are ambiguous representations of the physical specimen. An example of this is the genus name Melpomene, the scientific nomenclatural representation of a genus of ferns, but also of a genus of spiders. The physical specimens for each of these are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of the taxonomic concept versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to see the conceptual plurality or singularity of the use of these species' names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate to explore this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts).
It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation, so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one of the ways to identify concepts for ontology building, because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond the words themselves. In this sense it could potentially be used to identify whether the hierarchical structures present within the empirical body of literature match those identified in ontologies created to represent them. The first stages of this research have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species' names, common names and more general names as classes, which will be the focus of this paper. The next step in the research focuses on a larger corpus in which specific words can be analysed and then compared with existing ontological structures looking at the same material, to evaluate the methods by means of an alternative perspective. This research aims to provide evidence as to the validity of the current methods in knowledge representation for biological entities, and also to shed light on the way that scientific nomenclature is used within the literature.
Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics
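The collocation counting at the heart of this approach can be sketched in a few lines: scan a fixed window around each occurrence of a target name and tally the neighbouring words. The toy corpus below is invented purely to illustrate the ambiguous Melpomene example; real corpus tools also lemmatize, tag parts of speech, and weight collocates statistically:

```python
from collections import Counter

# Tiny invented corpus: count words co-occurring with an ambiguous genus
# name within a +/-2-word window.
corpus = ("the fern genus Melpomene grows on trees while the spider genus "
          "Melpomene spins small webs the fern Melpomene has fronds").split()

WINDOW = 2
collocates = Counter()
for i, word in enumerate(corpus):
    if word == "Melpomene":
        lo, hi = max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)
        for j in range(lo, hi):
            if j != i:
                collocates[corpus[j]] += 1

print(collocates.most_common(3))
```

Even on this toy text, the collocate counts separate the fern context ("fern", "fronds") from the spider context ("spider", "webs"), which is the kind of pattern the study looks for at scale.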
Procedia PDF Downloads 138
551 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Because this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have been in widespread use for the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for DTM construction. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications. With LiDAR technology, a 3D point cloud is created by obtaining numerous point data. Recently, however, with the development of image mapping methods, the use of unmanned aerial vehicles (UAV) for photogrammetric data acquisition has increased DTM generation from image-based point clouds. The accuracy of the DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation methods. In this study, the random data reduction method is compared for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75, 50, 25 and 5% of the original image-based point cloud data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method.
The results show that the random data reduction method can be used to reduce image-based point cloud datasets to a 50% density level while still maintaining the quality of the DTM.
Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging
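The random reduction step itself is plain subsampling without replacement: draw a fixed fraction of the points at random and interpolate the DTM from the subset. A minimal sketch with a synthetic cloud and a fixed seed for reproducibility (not the study's dataset; the Kriging step that follows is omitted):

```python
import random

# Random data reduction for a point cloud: keep a fixed fraction of
# (x, y, z) points. The cloud below is synthetic.
random.seed(42)  # fixed seed so the subsets are reproducible
cloud = [(float(i), float(i % 7), float(i % 3)) for i in range(1000)]

def reduce_cloud(points, keep_fraction):
    """Return a random subset containing keep_fraction of the points."""
    k = round(len(points) * keep_fraction)
    return random.sample(points, k)

subset50 = reduce_cloud(cloud, 0.50)  # the 50% level found acceptable
subset05 = reduce_cloud(cloud, 0.05)
print(len(subset50), len(subset05))  # 500 50
```

The study's comparison then interpolates a DTM from each subset and measures its deviation from the full-density DTM.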
Procedia PDF Downloads 159
550 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis
Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab
Abstract:
Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on the peak plantar pressure distribution and the center of pressure during walking. Methodologies: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, due to differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was measured. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (classification of yes or no). We compared the DNN and SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach has higher accuracy for predictions, enabling applications for foot disorder diagnosis. The detection accuracy was 71% with the deep learning algorithm and 78% with the SVM algorithm. Moreover, working with the peak plantar pressure distribution was more accurate than working with the center of pressure dataset.
Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error, once some restrictions are properly removed.
Keywords: deep neural network, foot disorder, plantar pressure, support vector machine
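The per-disorder yes/no classification step can be illustrated with a minimal stand-in: a nearest-centroid rule on two synthetic "peak pressure" features. This is only a sketch of the classification idea; the study itself used an SVM and a deep network on normalized force-plate data, and every number here is invented:

```python
# Nearest-centroid stand-in for a yes/no foot-disorder classifier.
# Features are two invented "peak plantar pressure" summaries; label 1
# means "disorder present". Not the study's model or data.

train = [((10.0, 2.0), 0), ((11.0, 1.5), 0), ((20.0, 8.0), 1), ((21.0, 9.0), 1)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

c0 = centroid([x for x, label in train if label == 0])
c1 = centroid([x for x, label in train if label == 1])

def predict(x):
    # assign to the class whose centroid is closer (squared distance)
    d0 = (x[0] - c0[0]) ** 2 + (x[1] - c0[1]) ** 2
    d1 = (x[0] - c1[0]) ** 2 + (x[1] - c1[1]) ** 2
    return 0 if d0 <= d1 else 1

print(predict((10.5, 2.0)), predict((19.0, 7.5)))  # 0 1
```

An SVM replaces the centroid rule with a maximum-margin separating boundary, which is why it can outperform simpler rules on overlapping classes, consistent with the 78% vs 71% accuracies reported.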
Procedia PDF Downloads 360
549 Comparative Settlement Analysis on the under of Embankment with Empirical Formulas and Settlement Plate Measurement for Reducing Building Crack around of Embankments
Authors: Safitri Nur Wulandari, M. Ivan Adi Perdana, Prathisto L. Panuntun Unggul, R. Dary Wira Mahadika
Abstract:
In road construction on soft soil, a soil improvement method is needed to improve the bearing capacity of the ground so that it can withstand traffic loads. Much of the land in Indonesia consists of soft soil, a type of clay with a consistency from very soft to medium stiff, an undrained shear strength Cu < 0.25 kg/cm², or an estimated NSPT value < 5 blows/ft. This study focuses on the analysis of the effect of the preloading load (embankment) on the settlement ratio under the embankment, which in turn affects building cracks around the embankment. The method used in this research is a superposition method for the embankment distribution at 27 locations, with undisturbed soil samples from borehole points in Java and Kalimantan, Indonesia, and the results of settlement plate monitoring in the field are then correlated using the Asaoka method. The settlement plate monitoring results were taken from the embankment of Ahmad Yani Airport in Semarang at 32 points. The values of Cc (compression index) were taken from laboratory test results where available, while untested Cc values were obtained from the empirical formula of Ardhana and Mochtar (1999). In this research, the field monitoring showed almost the same results as the empirical formulation, with a standard deviation of 4%, and the empirical result of this analysis was obtained as a linear formula. The empirical linear formula for the effect of an embankment fill as high as 4.25 m is y = 3.1209x + 0.0026 for an embankment slope of 1:8, for the same analysis with the initial embankment height in the field. The settlement at the edge of the embankment is not zero: at a quarter of the embankment width the average settlement ratio is 0.951, while at the edge of the embankment the settlement ratio is 0.049. The influence area around the embankment extends approximately 1 meter for a slope of 1:8 and 7 meters for a slope of 1:2.
Settlement within this influence area can cause cracking of adjacent buildings, which should be taken into account for sustainable development.
Keywords: building cracks, influence area, settlement plate, soft soil, empirical formula, embankment
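The Asaoka method used with the settlement-plate readings works by plotting each reading against the previous one at equal time intervals, fitting a line s(i+1) = b0 + b1·s(i), and taking the ultimate settlement as b0 / (1 - b1). A sketch on a synthetic, noiseless settlement series (the magnitudes and rate are invented, not Ahmad Yani Airport data):

```python
import math

# Asaoka method sketch: the ultimate settlement is the fixed point of the
# fitted line s_(i+1) = b0 + b1 * s_i, i.e. b0 / (1 - b1).
s_final, rate, dt = 0.80, 0.10, 10.0  # m, 1/day, days (illustrative)
s = [s_final * (1 - math.exp(-rate * dt * i)) for i in range(1, 12)]

x, y = s[:-1], s[1:]  # consecutive readings at equal intervals
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
     sum((xi - mx) ** 2 for xi in x)
b0 = my - b1 * mx
s_ultimate = b0 / (1 - b1)
print(round(s_ultimate, 3))  # recovers the 0.8 m ultimate settlement
```

On real plate readings the points scatter about the line, but the fixed-point construction is the same; the study compares this field-derived value with the empirical-formula prediction.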
Procedia PDF Downloads 346
548 Establishment and Aging Process Analysis in Dermal Fibroblast Cell Culture of Green Turtle (Chelonia mydas)
Authors: Yemima Dani Riani, Anggraini Barlian
Abstract:
The green turtle (Chelonia mydas) is a well-known long-lived turtle whose age can reach 100 years. Senescence in the green turtle is an interesting process to study because until now no clear explanation has been established for senescence at the cellular or molecular level in this species. Since 1999, the green turtle has been listed as an endangered species. Hence, the establishment of a fibroblast skin cell culture of the green turtle may provide material for future studies of senescence. One common marker used for detecting senescence is telomere shortening. Reduced activity of telomerase, the reverse transcriptase enzyme which adds the TTAGGG DNA sequence to telomere ends, may also cause senescence. The purposes of this research are to establish and identify a green turtle fibroblast skin cell culture and to compare telomere length and telomerase activity between passages 5 and 14. The primary cell culture was established with the primary explant method and then cultured in Leibovitz-15 (Sigma) supplemented with 10% Fetal Bovine Serum (Sigma) and 100 U/mL Penicillin/Streptomycin (Sigma) at 30 ± 1 °C. Cells were identified with Rabbit Anti-Vimentin Polyclonal Antibody (Abcam) and Goat Polyclonal Antibody (Abcam) using a confocal microscope (Zeiss LSM 170). Telomere length was obtained using the TeloTAGGG Telomere Length Assay (Roche), while telomerase activity was obtained using the TeloTAGGG Telomerase PCR ElisaPlus (Roche). The primary cell culture from green turtle skin had fibroblastic morphology, and an immunocytochemistry test with vimentin antibody proved that the culture was fibroblast cells. Measurement showed that both the telomere length and the telomerase activity at passage 14 were greater than at passage 5. However, based on morphology, the green turtle fibroblast skin cell culture showed senescent morphology.
Based on the analysis of telomere length and telomerase activity, it is suspected that the fibroblast skin cell culture of the green turtle does not undergo aging through telomere shortening.
Keywords: cell culture, Chelonia mydas, telomerase, telomere, senescence
Procedia PDF Downloads 426
547 Voluntary Water Intake of Flavored Water in Euhydrated Horses
Authors: Brianna M. Soule, Jesslyn A. Bryk-Lucy, Linda M. Ritchie
Abstract:
Colic, defined as abdominal pain in the horse, has several known predisposing factors. Decreased water intake has been shown to predispose equines to impaction colic. The objective of this study was to determine if offering flavored water (sweet feed or banana extract) would increase voluntary water intake in horses, to serve as an accessible, noninvasive method for farm managers, veterinarians, or owners to decrease the risk of impaction colic. An a priori power analysis, conducted using G*Power version 3.1.9.7, indicated that the minimum sample size required to achieve 80% power for detecting a large effect at a significance level of α = .05 was 19 horses for a one-way repeated measures ANOVA with three treatment levels, assuming a non-sphericity correction of ε = 0.5. After a three-day control period, 21 horses were randomly divided into two sequences and offered either banana or sweet feed flavored water. Horses always had a bucket of unflavored water available. A repeated measures study design was used to measure the water consumption of each horse over a 62-hour period. A one-way repeated measures ANOVA was conducted to determine whether there were statistically significant differences among the means for the three-day average water intake (ml/kg). Although not statistically significant (F(2, 38) = 1.28, p = .290, partial η2 = .063), the three-day average water intake was largest for banana flavored water (M = 53.51, SD = 9.25 ml/kg), followed by sweet feed (M = 52.93, SD = 11.99 ml/kg), and, finally, unflavored water (M = 50.40, SD = 10.82 ml/kg). Paired-samples t-tests were used to determine whether there was a statistically significant difference between the three-day average water intake (ml/kg) for flavored versus unflavored water.
The average unflavored water intake (M = 29.3 ml/kg, SD = 8.9) over the measurement period was greater than the banana flavored water (M = 27.7 ml/kg, SD = 9.8), but the average consumption of the sweet feed flavored water (M = 30.4 ml/kg, SD = 14.6) was greater than unflavored water (M = 24.3 ml/kg, SD = 11.4). None of these differences in average intake were statistically significant (p > .244). Future research is warranted to determine if other flavors significantly increase voluntary water intake in horses.
Keywords: colic, equine, equine science, water intake, flavored water, horses, equine management, equine health, horse health, horse health care management, colic prevention
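The paired-samples comparison used here reduces to the differences alone: t = mean(d) / (sd(d) / sqrt(n)), where d is each horse's flavored-minus-unflavored intake. A small sketch computing the statistic by hand; the intake values are invented, not the study's data:

```python
import math

# Paired t statistic by hand: t = mean(d) / (sd(d) / sqrt(n)).
# Intake values (ml/kg) are invented for illustration.
unflavored = [29.0, 31.5, 27.0, 30.0, 28.5, 32.0]
flavored   = [30.5, 30.0, 28.5, 31.0, 29.0, 33.5]

diffs = [f - u for f, u in zip(flavored, unflavored)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))
print(round(t, 2))  # compare against the t distribution with n-1 df
```

The p-value then comes from the t distribution with n - 1 degrees of freedom (e.g. via `scipy.stats.ttest_rel` on the two lists).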
Procedia PDF Downloads 150
546 Influence of Ammonia Emissions on Aerosol Formation in Northern and Central Europe
Authors: A. Aulinger, A. M. Backes, J. Bieser, V. Matthias, M. Quante
Abstract:
High concentrations of particles pose a threat to human health. Thus, legal maximum concentrations of PM10 and PM2.5 in ambient air have been steadily decreased over the years. In central Europe, the inorganic species ammonium sulphate and ammonium nitrate make up a large fraction of fine particles. Many studies investigate the influence of emission reductions of sulphur and nitrogen oxides on aerosol concentrations. Here, we focus on the influence of ammonia (NH3) emissions. While emissions of sulphur and nitrogen oxides are quite well known, ammonia emissions are subject to high uncertainty. This is due to the uncertainty of the location, amount, and time of fertilizer application in agriculture, and of the storage and treatment of manure from animal husbandry. For this study, we implemented a crop growth model into the SMOKE emission model. Depending on temperature, local legislation, and crop type, individual temporal profiles for fertilizer and manure application are calculated for each model grid cell. Additionally, the diffusion from soils and plants and the direct release from open and closed barns are determined. The emission data were used as input for the Community Multiscale Air Quality (CMAQ) model. Comparisons to observations from the EMEP measurement network indicate that the new ammonia emission module leads to better agreement between model and observations (for both ammonia and ammonium). Finally, the ammonia emission model was used to create emission scenarios. These include emissions based on future European legislation, as well as a dynamic evaluation of the influence of different agricultural sectors on particle formation. It was found that a reduction of ammonia emissions by 50% led to a 24% reduction of total PM2.5 concentrations during wintertime in the model domain. The observed reduction was mainly driven by reduced formation of ammonium nitrate. 
Moreover, emission reductions during winter had a larger impact than reductions during the rest of the year.
Keywords: ammonia, ammonia abatement strategies, CTM, seasonal impact, secondary aerosol formation
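A minimal sketch of the kind of temperature-dependent temporal allocation the emission module performs is given below; the degree-day weighting, the application window, and all numbers are assumptions for illustration, and the actual SMOKE implementation additionally accounts for local legislation and crop type.

```python
# Illustrative sketch (not the actual SMOKE/CMAQ code) of allocating an annual
# fertilizer NH3 total to daily values for one grid cell: days inside the legal
# application window are weighted by temperature above a growth threshold.
import numpy as np

def daily_nh3_profile(annual_total, temps, window, t_base=5.0):
    """temps: daily mean temperatures (deg C); window: (start_day, end_day)."""
    weights = np.zeros_like(temps)
    start, end = window
    weights[start:end] = np.clip(temps[start:end] - t_base, 0.0, None)
    if weights.sum() == 0.0:
        return weights                      # no emissions outside the window
    return annual_total * weights / weights.sum()

temps = 10.0 + 10.0 * np.sin(np.linspace(0.0, np.pi, 365))  # synthetic seasonal cycle
daily = daily_nh3_profile(100.0, temps, window=(60, 240))
print(round(daily.sum(), 6))  # -> 100.0, i.e. the annual total is conserved
```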
Procedia PDF Downloads 352
545 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products
Authors: Maciej Jedrzejczyk, Karolina Marzantowicz
Abstract:
The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by Smart Grid networks. New sources of energy introduce a volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact surplus energy based on accurate, time-synchronous measurements. The proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and allow real-time insights into present and forecasted grid operations, as well as the state and health of the network. The objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply, and a longer grid infrastructure life cycle. The methods used for this study are based on a comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security, and adaptability to various market topologies. The intended output of this research is the design of a framework for a safer, more efficient, and more scalable Smart Grid network that bridges the gap between traditional components of the energy network and individual energy producers. The results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New platforms for Smart Grids achieving measurable efficiencies will allow for the development of new types of grid KPIs, multi-smart-grid branches, markets, and businesses.
Keywords: autonomous agents, distributed computing, distributed ledger technologies, large scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids
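As a toy illustration of the distributed-ledger idea evaluated here, the sketch below hash-chains P2P energy trades so that any tampering with a recorded transaction is detectable. The node names, prices, and flat record schema are invented for the example; real DLT platforms add consensus, replication, and access control on top of this basic integrity mechanism.

```python
# Minimal hash-chained transaction ledger: each entry stores the SHA-256 hash
# of its predecessor, so modifying any past trade breaks the chain.
import hashlib
import json

def entry_hash(entry):
    # Canonical JSON so the same entry always hashes identically
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_trade(ledger, seller, buyer, kwh, price_ct):
    prev = entry_hash(ledger[-1]) if ledger else "0" * 64
    ledger.append({"prev": prev, "seller": seller, "buyer": buyer,
                   "kwh": kwh, "price_ct": price_ct})
    return ledger

def verify(ledger):
    # Each entry must reference the hash of its predecessor
    return all(ledger[i]["prev"] == entry_hash(ledger[i - 1])
               for i in range(1, len(ledger)))

ledger = []
append_trade(ledger, "pv_node_7", "household_12", 3.5, 28)   # hypothetical trades
append_trade(ledger, "wind_node_2", "household_12", 1.2, 25)
print(verify(ledger))  # -> True
```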
Procedia PDF Downloads 303
544 A Valid Professional Development Framework for Supporting Science Teachers in Relation to Inquiry-Based Curriculum Units
Authors: Fru Vitalis Akuma, Jenna Koenen
Abstract:
The science education community is increasingly calling for learning experiences that mirror the work of scientists. Although inquiry-based science education is aligned with these calls, the implementation of this strategy is a complex and daunting task for many teachers. Thus, policymakers and researchers have noted the need for continued teacher Professional Development (PD) in the enactment of inquiry-based science education, coupled with effective ways of reaching the goals of teacher PD. This is a complex problem for which educational design research is suitable. The purpose at this stage of our design research is to develop a generic PD framework that is valid as the blueprint of a PD program for supporting science teachers in relation to inquiry-based curriculum units. The seven components of the framework are the goal, learning theory, strategy, phases, support, motivation, and an instructional model. Based on a systematic review of the literature on effective (science) teacher PD, coupled with developer screening, we have generated a design principle for each component of the PD framework. For example, as per the associated design principle, the goal of the framework is to provide science teachers with experiences in authentic inquiry, coupled with enhancing their competencies linked to the adoption, customization, and design of inquiry-based curriculum units, followed by their classroom implementation and revision. The seven design principles have allowed us to synthesize the PD framework, which, together with the design principles, constitutes the preliminary outcome of the current research. We are in the process of evaluating the content and construct validity of the framework, based on nine one-on-one interviews with experts in inquiry-based classrooms and teacher learning. To this end, we have developed an interview protocol with the input of eight such experts in South Africa and Germany. 
Using the protocol, the expert appraisal of the PD framework will involve three experts, from Germany, South Africa, and Cameroon, respectively. These countries, where we originate and/or work, provide a variety of inquiry-based science education contexts, making them suitable for the evaluation of the generic PD framework. Based on the evaluation, we will revise the framework and its seven design principles to arrive at the final outcomes of the current research. While the final content- and construct-valid version of the framework will serve as an example of the ways through which effective inquiry-based science teacher PD may be achieved, the final design principles will be useful to researchers when transforming the framework for use in any specific educational context. For example, in our further research, we will transform the framework into one that is practical and effective in supporting inquiry-based practical work in resource-constrained physical sciences classrooms in South Africa. Researchers in other educational contexts may similarly build on the final framework and design principles in their work. Thus, our final outcomes will inform practice and research on the support of teachers, increasing the incorporation of learning experiences that mirror the work of scientists worldwide.
Keywords: design principles, educational design research, evaluation, inquiry-based science education, professional development framework
Procedia PDF Downloads 152
543 Integrative-Cyclical Approach to the Study of Quality Control of Resource Saving by the Use of Innovation Factors
Authors: Anatoliy A. Alabugin, Nikolay K. Topuzov, Sergei V. Aliukov
Abstract:
It is well known that quantitative evaluation of the quality control of some economic processes (in particular, resource saving) with the help of innovation factors faces three groups of problems: high uncertainty of the quality-management indicators, their considerable ambiguity, and the high costs of providing large-scale research. These problems arise from the contradictory objectives of enhancing quality control in accordance with innovation factors while preserving the economic stability of the enterprise. Such factors are felt most acutely in countries lagging behind the developed economies of the world according to the criteria of innovativeness and effectiveness of resource-saving management. In our opinion, the following two methods are the most effective for reconciling the above-mentioned objectives and reducing the conflict between the problems: 1) the use of paradigms and concepts of evolutionary improvement of the quality of resource-saving management in the cycle "from the project of an innovative product (technology) to its commercialization and the updating of customer-value parameters"; 2) the application of the so-called integrative-cyclical approach, consistent with the complexity and type of the concept, to studies allowing a quantitative assessment of the stages of achieving consistency of these objectives (from a baseline of imbalance, through their compromise, to the achievement of positive synergies). 
For implementation, the following mathematical tools are included in the integrative-cyclical approach: index-factor analysis (to identify the most relevant factors); regression analysis of the relationship between quality control and the factors; the use of the analysis results in a fuzzy-set model (to adjust the feature space); and methods of non-parametric statistics (to decide on the completion or repetition of a cycle of the approach, depending on the direction and closeness of the correlation between the ranks of the goal-disbalance indicators). The repetition is performed after partial substitution of technical and technological ("hard") factors by management ("soft") factors in accordance with our proposed methodology. Testing of the proposed approach has shown that, in comparison with world practice, there are opportunities to improve the quality of resource-saving management using innovation factors. We believe that the implementation of this promising research, which provides consistent management decisions for reducing the severity of the above-mentioned contradictions and increasing the validity of the choice of resource-development strategies in terms of the parameters of quality management and enterprise sustainability, is a worthwhile prospect. Our existing experience in the field of quality resource-saving management and the achieved level of scientific competence of the authors allow us to hope that the use of the integrative-cyclical approach to the study and evaluation of the resulting and factor indicators will help raise resource-saving characteristics up to the values existing in developed economies of the post-industrial type.
Keywords: integrative-cyclical approach, quality control, evaluation, innovation factors, economic sustainability, innovation cycle of management, disbalance of goals of development
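The non-parametric stopping rule mentioned above can be sketched with a rank correlation between the two goal indicators; the indicator ranks and the consistency threshold below are purely illustrative assumptions, not values from the study.

```python
# Hedged sketch of a cycle-repetition decision: repeat the cycle while the
# Spearman rank correlation between the quality-control indicator and the
# economic-stability indicator remains weak or insignificant.
from scipy import stats

quality_rank   = [3, 1, 4, 2, 6, 5, 7]   # hypothetical ranks of quality-control indicator
stability_rank = [2, 1, 5, 3, 6, 4, 7]   # hypothetical ranks of economic-stability indicator

rho, p = stats.spearmanr(quality_rank, stability_rank)
repeat_cycle = not (rho > 0.7 and p < 0.05)  # assumed consistency threshold
print(f"rho = {rho:.2f}, repeat cycle = {repeat_cycle}")
```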
Procedia PDF Downloads 247
542 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator
Authors: Yildiz Stella Dak, Jale Tezcan
Abstract:
Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical, or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. The development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model, and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. 
Given a set of candidate input variables and the output variable of interest, the LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, the selection of a compact set of variables is important in cases where a small number of recordings is available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum-direction spectral acceleration is investigated. The candidate predictors considered are the magnitude, the closest distance to the rupture (Rrup), and the time-averaged shear-wave velocity in the top 30 m (Vs30). Using the LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection
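The variable-selection behaviour described above can be illustrated on synthetic data; the simulated records, coefficients, and noise level below are invented for the sketch, and this is neither the NGA dataset nor the paper's actual model.

```python
# Sketch of LASSO-based predictor ranking: as the penalty grows, the weakest
# predictor's coefficient is driven exactly to zero, performing selection.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 600
mag = rng.uniform(4.0, 8.0, n)               # magnitude-like predictor
log_rrup = np.log(rng.uniform(1.0, 200.0, n))    # distance-like predictor
ln_vs30 = np.log(rng.uniform(150.0, 1500.0, n))  # site-stiffness-like predictor
ln_sa = 1.2 * mag - 1.5 * log_rrup - 0.1 * ln_vs30 + rng.normal(0.0, 0.3, n)

X = StandardScaler().fit_transform(np.column_stack([mag, log_rrup, ln_vs30]))
for alpha in (0.01, 0.3, 1.0):               # increasing shrinkage
    coefs = Lasso(alpha=alpha).fit(X, ln_sa).coef_
    print(alpha, np.round(coefs, 3))         # weakest (site) term drops out first
```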
Procedia PDF Downloads 331
541 Retrospective Assessment of the Safety and Efficacy of Percutaneous Microwave Ablation in the Management of Hepatic Lesions
Authors: Suang K. Lau, Ismail Goolam, Rafid Al-Asady
Abstract:
Background: The majority of patients with hepatocellular carcinoma (HCC) are not suitable for curative treatment, in the form of surgical resection or transplantation, due to tumour extent and underlying liver dysfunction. In these non-resectable cases, a variety of non-surgical therapies are available, including microwave ablation (MWA), which has gained popularity due to its low morbidity, low reported complication rate, and the ability to perform multiple ablations simultaneously. Objective: The aim of this study was to evaluate the validity of MWA as a treatment option in the management of HCC and hepatic metastatic disease by assessing its efficacy and complication rate at a tertiary hospital in Westmead (Australia). Methods: A retrospective observational study was performed evaluating patients who underwent MWA between 1/1/2017 and 31/12/2018 at Westmead Hospital, NSW, Australia. Outcome measures, including residual disease, recurrence rates, and major and minor complication rates, were retrospectively analysed over a 12-month period following MWA treatment. Excluded patients were those whose lesions were treated for residual or recurrent disease from treatment that occurred prior to the study window (11 patients) and those who were lost to follow-up (2 patients). Results: Following treatment of 106 new hepatic lesions, the complete response (CR) rate was 86% (91/106) at 12 months of follow-up. Ten patients had residual disease on post-treatment follow-up imaging, corresponding to an incomplete response (ICR) rate of 9.4% (10/106). The local recurrence rate (LRR) was 4.7% (5/106) over a follow-up period of up to 12 months. The minor complication rate was 9.4% (10/106), including asymptomatic pneumothorax (n=2), asymptomatic pleural effusions (n=2), right lower lobe pneumonia (n=3), pain requiring admission (n=1), hypotension (n=1), cellulitis (n=1), and intraparenchymal hematoma (n=1). 
One major complication was reported: a pleuro-peritoneal fistula causing a recurrent large pleural effusion that necessitated repeated thoracocentesis (n=1). There was no statistically significant association between tumour size, location, or ablation factors and the risk of recurrence or residual disease. A subset analysis identified six segment VIII lesions that were treated via a trans-pleural approach. This cohort demonstrated an overall complication rate of 33% (2/6), including one minor complication of asymptomatic pneumothorax and one major complication of pleuro-peritoneal fistula. Conclusions: Microwave ablation therapy is an effective and safe treatment option in cases of non-resectable hepatocellular carcinoma and liver metastases, with good local tumour control and low complication rates. A trans-pleural approach to high segment VIII lesions is associated with a higher complication rate and warrants greater caution.
Keywords: hepatocellular carcinoma, liver metastases, microwave ablation, trans-pleural approach
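As a quick arithmetic check, the headline response rates can be reproduced from the lesion counts reported in the abstract:

```python
# Outcome rates from the reported counts (106 new hepatic lesions)
lesions = 106
complete, residual, recurrence = 91, 10, 5

print(f"CR  = {complete / lesions:.1%}")    # complete response
print(f"ICR = {residual / lesions:.1%}")    # incomplete response (residual disease)
print(f"LRR = {recurrence / lesions:.1%}")  # local recurrence
```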
Procedia PDF Downloads 139