Search results for: appliances efficiency improvement
356 Training for Search and Rescue Teams: Online Training for SAR Teams to Locate Lost Persons with Dementia Using Drones
Authors: Dalia Hanna, Alexander Ferworn
Abstract:
This research provides detailed proposed training modules for public safety teams and, specifically, SAR teams responsible for search and rescue operations related to finding lost persons with dementia. Finding a lost person alive is the goal of this training. Time matters if a lost person is to be found alive. Finding lost people living with dementia is quite challenging, as they are unaware they are lost and will not seek help. Even a small contribution to SAR operations could help save a life. SAR operations will always require expert professionals and human volunteers. However, we can reduce their time, save lives, and reduce costs by providing practical training that is based on real-life scenarios. The content of the proposed training is based on the research work done by the researcher in this area. This research has demonstrated that an algorithmic approach based on utilizing drones could support a successful search outcome. Understanding the behavior of the lost person, learning where they may be found, predicting their survivability, and automating the search are all contributions of this work, founded in theory and demonstrated in practice. In crisis management, human behavior constitutes a vital aspect of responding to the crisis; the speed and efficiency of the response are often affected by the difficulty of the context of the operation. Therefore, training in this area plays a significant role in preparing the crisis manager to manage the emotional aspects that influence decision-making in these critical situations. Since it is crucial to be able to make high-level strategic choices and to apply crisis management procedures, simulation exercises become central in training crisis managers to gain the skills needed to respond critically to these events. The training will enhance the responders' ability to make decisions and to anticipate possible consequences of their actions through flexible reasoning in responding to the crisis efficiently and quickly. As adult learners, search and rescue teams approach training and learning by taking responsibility for the learning process, appreciating flexible learning, and contributing to the teaching and learning happening during that training. These are all characteristics of adult learning theories. The learner self-reflects, gathers information, collaborates with others, and is self-directed. One of the learning strategies associated with adult learning is effective elaboration. It helps learners to remember information in the long term and use it in situations where it might be appropriate. It is also a strategy that can be taught easily and used with learners of different ages. Designers must design reflective activities to improve the student's intrapersonal awareness.
Keywords: training, OER, dementia, drones, search and rescue, adult learning, UDL, instructional design
Procedia PDF Downloads 108
355 Thermo-Economic Evaluation of Sustainable Biogas Upgrading via Solid-Oxide Electrolysis
Authors: Ligang Wang, Theodoros Damartzis, Stefan Diethelm, Jan Van Herle, François Marechal
Abstract:
Biogas production from anaerobic digestion of organic sludge from wastewater treatment, as well as of various urban and agricultural organic wastes, is of great significance for achieving a sustainable society. Two upgrading approaches for cleaned biogas can be considered: (1) direct H₂ injection for catalytic CO₂ methanation and (2) CO₂ separation from biogas. The first approach usually employs electrolysis technologies to generate hydrogen and increases the biogas production rate, while the second usually applies commercially available, highly selective membrane technologies to efficiently extract CO₂ from the biogas, with the captured CO₂ then sent for compression and storage for further use. A straightforward way of utilizing the captured CO₂ is on-site catalytic CO₂ methanation. From the perspective of system complexity, the second approach may be questioned, since it introduces an additional expensive membrane component for producing the same amount of methane. However, given that the sustainability of the produced biogas should be retained after upgrading, renewable electricity should be supplied to drive the electrolyzer. Therefore, considering the intermittent nature and seasonal variation of renewable electricity supply, the second approach offers high operational flexibility. This indicates that these two approaches should be compared based on the availability and scale of the local renewable power supply, and not only on the technical systems themselves. Solid-oxide electrolysis (SOE) generally offers high overall system efficiency, and, more importantly, it can achieve simultaneous electrolysis of CO₂ and H₂O (namely, co-electrolysis), which may bring significant benefits for the case of CO₂ separation from the produced biogas. When taking co-electrolysis into account, two additional upgrading approaches can be proposed: (1) direct steam injection into the biogas, with the mixture going through the SOE, and (2) CO₂ separation from biogas, with the captured CO₂ used later for co-electrolysis. The case study of integrating SOE into a wastewater treatment plant is investigated with wind power as the renewable power. The dynamic production of biogas is provided on an hourly basis with the corresponding oxygen and heating requirements. All four approaches mentioned above are investigated and compared thermo-economically: (a) steam electrolysis with grid power, as the base case for steam electrolysis; (b) CO₂ separation and co-electrolysis with grid power, as the base case for co-electrolysis; (c) steam electrolysis and CO₂ separation (and storage) with wind power; and (d) co-electrolysis and CO₂ separation (and storage) with wind power. The influence of the scale of the wind power supply is investigated by a sensitivity analysis. The results derived provide a general understanding of the economic competitiveness of SOE for sustainable biogas upgrading, thus assisting decision-making for biogas production sites. The research leading to the presented work is funded by the European Union's Horizon 2020 under grant agreements n° 699892 (ECo, topic H2020-JTI-FCH-2015-1) and SCCER BIOSWEET.
Keywords: biogas upgrading, solid-oxide electrolyzer, co-electrolysis, CO₂ utilization, energy storage
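For orientation, a minimal back-of-the-envelope Python sketch of the methanation route follows; the biogas flow, CO₂ fraction, and SOE specific energy consumption are illustrative assumptions, not figures from the study.

# Hydrogen demand and electricity input for methanating the CO2 share of a
# biogas stream via SOE. All numbers below are illustrative assumptions.
biogas_flow_nm3_h = 100.0      # assumed biogas production, Nm3/h
co2_fraction = 0.40            # assumed CO2 content of cleaned biogas
sec_h2_kwh_nm3 = 3.5           # assumed SOE specific energy use, kWh per Nm3 H2

co2_flow = biogas_flow_nm3_h * co2_fraction          # Nm3/h of CO2 to convert
h2_demand = 4.0 * co2_flow                           # Sabatier: CO2 + 4 H2 -> CH4 + 2 H2O
electrolyzer_power_kw = h2_demand * sec_h2_kwh_nm3   # electric input to the SOE

print(f"CO2 to methanate: {co2_flow:.1f} Nm3/h")
print(f"H2 demand:        {h2_demand:.1f} Nm3/h")
print(f"SOE power:        {electrolyzer_power_kw:.0f} kW")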
Procedia PDF Downloads 155
354 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimate an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SE); log-binomial GLM with convex optimisation and model-based SEs; log-binomial GLM with convex optimisation and permutation tests; modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all combinations of sample sizes (200, 1000, and 5000), outcome prevalences (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate, or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which methods are the most efficient, preserve the type-I error rate, are robust to model misspecification, or are the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimation may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
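To make the modified-Poisson approach concrete, the sketch below simulates a single trial scenario and fits a Poisson working model with robust (sandwich) standard errors using statsmodels; the sample size, event rate, and effect sizes are illustrative choices, not the study's simulation grid.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
treat = rng.integers(0, 2, n)                 # randomised treatment indicator
x = rng.normal(size=n)                        # one prognostic covariate
log_rr = 0.5                                  # assumed true log relative risk
p = np.clip(np.exp(np.log(0.10) + log_rr * treat + 0.3 * x), 0, 1)  # clip guards rare p > 1
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([treat, x]))
# modified Poisson: Poisson working model + robust (sandwich) covariance
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC1")
print("estimated RR for treatment:", np.exp(fit.params[1]))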
Procedia PDF Downloads 114
353 A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services
Authors: Giada Feletti, Daniela Tedesco, Paolo Trucco
Abstract:
The present study aims to develop a dashboard of Key Performance Indicators (KPIs) to enhance the information and predictive capabilities of Emergency Medical Services (EMS) systems, supporting both operational and strategic decisions of different actors. The research methodology starts with a first phase reviewing the technical-scientific literature on the indicators currently used for the performance measurement of EMS systems. From this literature analysis, it emerged that current studies focus on two distinct perspectives: the ambulance service, a fundamental component of pre-hospital health treatment, and patient care in the Emergency Department (ED). The perspective proposed by this study is to consider an integrated view of the ambulance service process and the ED process, both essential to ensure high quality of care and patient safety. Thus, the proposal focuses on the entire healthcare service process and, as such, allows considering the interconnection between the two EMS processes, the pre-hospital and the hospital one, connected by the assignment of the patient to a specific ED. In this way, it is possible to optimize the entire patient management. Therefore, attention is paid to dependencies between decisions that current EMS management models tend to neglect or underestimate. In particular, the integration of the two processes enables evaluating the advantage of an ED selection decision that has visibility of the EDs' saturation status and therefore considers the distance, the available resources, and the expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the dashboard was designed: the high number of analyzed KPIs was reduced by first eliminating those not in line with the aim of the study and then those supporting a similar functionality. The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators due to the unavailability of the data required for their computation. The final dashboard, which was discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens' awareness of ED accessibility in real time. By associating each KPI with the EMS phase it refers to, it was also possible to design a well-balanced dashboard covering both the efficiency and the effectiveness of the entire EMS process. Indeed, traditional KPIs mainly cover the initial phases related to the interconnection between the ambulance service and patient care, rather than the subsequent phases taking place in the hospital ED. This could be taken into consideration for the potential future development of the dashboard. Moreover, the research could proceed by building a multi-layer dashboard composed of a first level with a minimal set of KPIs to measure the basic performance of the EMS system at an aggregate level, and further levels with KPIs that can bring additional and more detailed information.
Keywords: dashboard, decision support, emergency medical services, key performance indicators
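As an illustration of how such KPIs can be computed from raw event logs, a minimal pandas sketch follows; the timestamps and phase names are invented placeholders.

import pandas as pd

# Assumed event log: one row per emergency call, one timestamp per EMS phase.
df = pd.DataFrame({
    "call":       pd.to_datetime(["2023-01-01 10:00", "2023-01-01 10:05"]),
    "on_scene":   pd.to_datetime(["2023-01-01 10:09", "2023-01-01 10:17"]),
    "ed_arrival": pd.to_datetime(["2023-01-01 10:30", "2023-01-01 10:41"]),
})

kpis = {
    "median response time (min)":   (df.on_scene - df.call).dt.total_seconds().median() / 60,
    "median call-to-ED time (min)": (df.ed_arrival - df.call).dt.total_seconds().median() / 60,
}
print(kpis)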
Procedia PDF Downloads 112
352 How Can Food Retailing Benefit from Neuromarketing Research: The Influence of Traditional and Innovative Tools of In-Store Communication on Consumer Reactions
Authors: Jakub Berčík, Elena Horská, Ľudmila Nagyová
Abstract:
Nowadays, the point of sale remains one of the few channels of communication that is not yet oversaturated and has great potential for the future. The fact that purchasing decisions are significantly affected by emotions, while up to 75% of them are made at the point of sale, only demonstrates its importance. The share of impulsive purchases is about 60-75%, depending on the particular product category. Nevertheless, habits above all predetermine the content of the shopping cart, and hence the role of in-store communication is to disrupt the routine and compel the customer to try something new. This is the reason why it is essential to know how to work with this relatively young branch of marketing communication as efficiently as possible. A new global trend in this discipline is evaluating the effectiveness of particular tools of in-store communication. To increase efficiency, it is necessary to become familiar with the factors affecting the customer both consciously and unconsciously, and that is a task for neuromarketing and sensory marketing. It is generally known that the customer remembers negative experiences much longer and more intensely than positive ones; therefore, it is essential for marketers to avoid such negative experiences. The final effect of POP (Point of Purchase) or POS (Point of Sale) tools is conditional not only on their quality and design but also on their location at the point of sale, which contributes to the overall positive atmosphere in the store. Therefore, in-store advertising is increasingly in the center of attention, and companies are willing to spend even a third of their marketing communication budget on it. The paper deals with a comprehensive, interdisciplinary study of the impact of traditional as well as innovative tools of in-store communication on the attention and emotional state (valence and arousal) of consumers on the food market. The research integrates measurements with an eye camera (eye tracker) and an electroencephalograph (EEG) in real grocery stores as well as in laboratory conditions, with the purpose of recognizing attention and emotional responses among respondents under the influence of selected tools of in-store communication. The object of the research includes traditional (e.g., wobblers, stoppers, floor graphics) and innovative (e.g., displays, wobblers with LED elements, interactive floor graphics) tools of in-store communication in the fresh unpackaged food segment. By using a mobile 16-channel electroencephalograph (EEG equipment) from the company EPOC, a mobile eye camera (eye tracker) from the company Tobii, and a stationary eye camera (eye tracker) from the company Gazepoint, we observe the attention and emotional state (valence and arousal) to reveal true consumer preferences using traditional and new, unusual communication tools at the point of sale of the selected foodstuffs. The paper concludes by suggesting possibilities for a rational, effective and energy-efficient combination of in-store communication tools, by which the retailer can accomplish not only a captivating and attractive presentation of displayed goods but ultimately also an increase in retail sales of the store.
Keywords: electroencephalograph (EEG), emotion, eye tracker, in-store communication
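One common proxy for emotional valence in EEG research is frontal alpha asymmetry; the sketch below illustrates such a computation on synthetic signals and is a generic example, not the authors' processing pipeline.

import numpy as np
from scipy.signal import welch

fs = 128                               # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
left  = np.random.randn(t.size)        # stand-ins for left/right frontal channels
right = np.random.randn(t.size)

def alpha_power(sig):
    f, pxx = welch(sig, fs=fs, nperseg=fs * 2)
    band = (f >= 8) & (f <= 13)        # alpha band, 8-13 Hz
    return np.sum(pxx[band]) * (f[1] - f[0])

# Frontal alpha asymmetry: higher values are often read as more positive valence.
valence_index = np.log(alpha_power(right)) - np.log(alpha_power(left))
print(valence_index)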
Procedia PDF Downloads 391
351 Next-Generation Lunar and Martian Laser Retro-Reflectors
Authors: Simone Dell'Agnello
Abstract:
There are laser retroreflectors on the Moon but none on Mars. Here we describe the design, construction, qualification and imminent deployment of next-generation, optimized laser retroreflectors on the Moon and on Mars (where they will be the first ones). These instruments are positioned by time-of-flight measurements of short laser pulses, the so-called 'laser ranging' technique. Data analysis is carried out with PEP, the Planetary Ephemeris Program of the CfA (Center for Astrophysics). Since 1969, Lunar Laser Ranging (LLR) to the Apollo/Lunokhod laser retro-reflector (CCR) arrays has supplied accurate tests of General Relativity (GR) and new gravitational physics: possible changes of the gravitational constant (Ġ/G), the weak and strong equivalence principle, gravitational self-energy (Parametrized Post-Newtonian parameter beta), geodetic precession, and the inverse-square force law; it can also constrain gravitomagnetism. Some of these measurements also allowed for testing extensions of GR, including spacetime torsion and non-minimally coupled gravity. LLR has also provided significant information on the composition of the deep interior of the Moon. In fact, LLR first provided evidence of the existence of a fluid component of the deep lunar interior. In 1969, CCR arrays contributed a negligible fraction of the LLR error budget. Since laser station range accuracy has improved by more than a factor of 100, the current arrays, because of lunar librations, now dominate the error budget due to their multi-CCR geometry. We developed a next-generation, single, large CCR, MoonLIGHT (Moon Laser Instrumentation for General relativity High-accuracy Test), unaffected by librations, that supports an improvement of the space segment of the LLR accuracy by up to a factor of 100. INFN also developed INRRI (INstrument for landing-Roving laser Retro-reflector Investigations), a microreflector to be laser-ranged by orbiters. Their performance is characterized at the SCF_Lab (Satellite/lunar laser ranging Characterization Facilities Lab, INFN-LNF, Frascati, Italy) for deployment on the lunar surface or in cislunar space. They will be used to accurately position landers, rovers, hoppers and orbiters of Google Lunar X Prize and space agency missions, thanks to LLR observations from stations of the International Laser Ranging Service in the USA, France and Italy. INRRI was launched in 2016 with the ESA mission ExoMars (Exobiology on Mars) EDM (Entry, descent and landing Demonstration Module), deployed on the Schiaparelli lander, and is proposed for the ExoMars 2020 Rover. Based on an agreement between NASA and ASI (Agenzia Spaziale Italiana), another microreflector, LaRRI (Laser Retro-Reflector for InSight), was delivered to JPL (Jet Propulsion Laboratory) and integrated on NASA's InSight Mars lander in August 2017 (launch scheduled for May 2018). Another microreflector, LaRA (Laser Retro-reflector Array), will be delivered to JPL for deployment on the NASA Mars 2020 Rover. The first lunar landing opportunities will be from early 2018 (with TeamIndus) to late 2018 with commercial missions, followed by opportunities with space agency missions, including the proposed deployment of MoonLIGHT and INRRI on NASA's Resource Prospector and its evolutions. In conclusion, we will significantly extend the CCR Lunar Geophysical Network and populate the Mars Geophysical Network. These networks will enable significantly improved tests of GR.
Keywords: general relativity, laser retroreflectors, lunar laser ranging, Mars geodesy
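The positioning principle behind laser ranging is simple time-of-flight arithmetic (range = c * round-trip time / 2), as the short sketch below shows; the example round-trip time is only illustrative.

C = 299_792_458.0                      # speed of light, m/s

def one_way_range_m(round_trip_s: float) -> float:
    # range to the retroreflector from the measured pulse round-trip time
    return C * round_trip_s / 2.0

# Example: a ~2.56 s round trip corresponds roughly to the Earth-Moon distance.
print(one_way_range_m(2.56) / 1e3, "km")   # ~384,000 km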
Procedia PDF Downloads 270
350 Robotic Process Automation in Accounting and Finance Processes: An Impact Assessment of Benefits
Authors: Rafał Szmajser, Katarzyna Świetla, Mariusz Andrzejewski
Abstract:
Robotic process automation (RPA) is a technology in which repeatable business processes are performed by computer programs (robots) that simulate the work of a human being. This approach assumes replacing an existing employee with dedicated software (software robots) to support activities that are primarily repetitive and uncomplicated and characterized by a low number of exceptions. RPA application is widespread in modern business services, particularly in the areas of Finance, Accounting and Human Resources Management. By utilizing this technology, the effectiveness of operations increases while reducing workload, minimizing possible errors in the process and, as a result, bringing a measurable decrease in the cost of providing services. Regardless of how the use of modern information technology is assessed, there are also some doubts as to whether we should replace human activities when implementing automation in business processes. After the initial awe at the new technological concept, a reflection arises: to what extent does the implementation of RPA increase the efficiency of operations, and is there a business case for implementing it? If the business case is beneficial, which business processes hold the greatest potential for RPA? A closer look at these issues was provided in this research, during which the respondents' view of the perceived advantages resulting from the use of robotization and automation in financial and accounting processes was verified. As a result of an online survey addressed to over 500 respondents from international companies, 162 complete answers were returned from the most important types of organizations in the modern business services industry, i.e., Business or IT Process Outsourcing (BPO/ITO), Shared Service Centers (SSC), Consulting/Advisory, and their customers. Answers were provided by representatives of the following positions in their organizations: Members of the Board, Directors, Managers and Experts/Specialists. The structure of the survey allowed the respondents to supplement the survey with additional comments and observations. The results formed the basis for the creation of a business case calculating the tangible benefits associated with the implementation of automation in the selected financial processes. The results of the statistical analyses carried out with regard to revenue growth confirmed the hypothesis that there is a correlation between job position and the perception of the impact of RPA implementation on individual benefits. The second hypothesis (H2), that there is a relationship between the kind of company in the business services industry and the perception of the impact of RPA on individual benefits, was not confirmed. Based on the survey results, the authors performed a simulation of a business case for the implementation of RPA in selected Finance and Accounting processes. The calculated payback period differed dramatically, ranging from 2 months for the Accounts Payable process, with 75% savings, to the extreme case of the Taxes process, where implementation and maintenance costs exceeded the savings resulting from the use of the robot.
Keywords: automation, outsourcing, business process automation, process automation, robotic process automation, RPA, RPA business case, RPA benefits
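A payback-period computation of the kind used in such a business case can be sketched in a few lines; all figures below are illustrative assumptions, not the survey's data.

# Minimal RPA payback-period sketch; every figure is an assumption for illustration.
implementation_cost = 40_000.0     # one-off cost of building the robot
monthly_licence     = 1_000.0      # recurring software/maintenance cost
monthly_savings     = 21_000.0     # labour cost avoided per month

net_monthly_benefit = monthly_savings - monthly_licence
# If maintenance exceeds savings (the Taxes-like case), the payback never arrives.
payback_months = (implementation_cost / net_monthly_benefit
                  if net_monthly_benefit > 0 else float("inf"))
print(f"payback: {payback_months:.1f} months")   # -> 2.0 months in this toy case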
Procedia PDF Downloads 137
349 Structural Analysis of a Composite Wind Turbine Blade
Abstract:
The design of an optimised horizontal-axis, 5-meter-long wind turbine rotor blade in accordance with the IEC 61400-2 standard is a research and development project aiming to fulfil the requirement of high torque efficiency from wind production and to make the structural components as light and strong as possible. For this purpose, a research study is presented here focusing on the structural characteristics of a composite wind turbine blade via finite element modelling and analysis tools. In this work, first, the required data regarding the general geometrical parts are gathered. Then, the airfoil geometries are created at various sections along the span of the blade by using CATIA software to obtain the two surfaces, namely the suction and the pressure side of the blade, in which there is a hat-shaped fibre-reinforced plastic spar beam, the so-called chassis, starting at 0.5 m from the root of the blade, extending up to 4 m, and filled with a foam core. The root part connecting the blade to the main rotor differential metallic hub, having twelve hollow threaded studs, is then modelled. The materials are assigned as two different types of glass fabrics, a polymeric foam core material, and a steel-balsa wood combination for the root connection parts. The glass fabrics are applied using hand wet lay-up lamination with epoxy resin: METYX L600E10C-0, with unidirectional continuous fibres, and METYX XL800E10F, having a tri-axial architecture with fibres in the 0, +45, -45 degree orientations in a ratio of 2:1:1. Divinycell H45 is used as the polymeric foam. The finite element modelling of the blade is performed via MSC PATRAN software, with various meshes created on each structural part, considering shell elements for all surface geometries, and lumped masses are added to simulate extra adhesive locations. For the static analysis, the boundary conditions are assigned as fixed at the root through the aforementioned studs, whereas for the dynamic analysis both fixed-free and free-free boundary conditions are used. By also taking mesh independency into account, MSC NASTRAN is used as the solver for both analyses. The static analysis aims at the tip deflection of the blade under its own weight, and the dynamic analysis comprises a normal-mode analysis performed to obtain the natural frequencies and corresponding mode shapes, focusing on the first five in-plane and out-of-plane bending modes and the torsional modes of the blade. The analysis results of this study are then used as a benchmark prior to modal testing, where the experiments on the produced wind turbine rotor blade confirmed the analytical calculations.
Keywords: dynamic analysis, fiber reinforced composites, horizontal axis wind turbine blade, hand-wet layup, modal testing
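Before a full FEA run, a closed-form cantilever formula gives an order-of-magnitude check on the static tip-deflection result; the sketch below uses assumed, representative section properties for a uniform beam, not the actual tapered, layered blade.

# Tip deflection of a uniform cantilever under its own weight:
# delta = q * L**4 / (8 * E * I). All values are illustrative assumptions.
q = 120.0        # self-weight per unit length, N/m
L = 5.0          # blade length, m
E = 25e9         # assumed effective laminate modulus, Pa
I = 4e-5         # assumed representative second moment of area, m^4

delta_tip = q * L**4 / (8 * E * I)
print(f"tip deflection ~ {delta_tip * 1000:.1f} mm")   # sanity bound only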
Procedia PDF Downloads 425
348 Effects of Environmental and Genetic Factors on Growth Performance, Fertility Traits and Milk Yield/Composition in Saanen Goats
Authors: Deniz Dincel, Sena Ardicli, Hale Samli, Mustafa Ogan, Faruk Balci
Abstract:
The aim of the study was to determine the effects of some environmental and genetic factors on growth, fertility traits, and milk yield and composition in Saanen goats. For this purpose, a total of 173 Saanen goats and kids were investigated for growth, fertility and milk traits in the Marmara Region of Turkey. Fertility parameters (n=70) were evaluated over two years. Milk samples were collected during lactation, and the milk yield/components (n=59) of each goat were calculated. For the CSN3 and AGPAT6 genes, the genotypes were defined by PCR-RFLP. Saanen kids (n=86-112) were measured from birth to 6 months of life. The birth, weaning, 60ᵗʰ, 90ᵗʰ, 120ᵗʰ and 180ᵗʰ day average live weights were calculated. The effects of maternal age on pregnancy rate (p < 0.05), birth rate (p < 0.05), infertility rate (p < 0.05), single-born kidding (p < 0.001), twinning rate (p < 0.05), triplet rate (p < 0.05), survival rate of kids until weaning (p < 0.05), number of kids per parturition (p < 0.01) and number of kids per mating (p < 0.01) were found significant. The effects of year on birth rate (p < 0.05), abortion rate (p < 0.001), single-born kidding (p < 0.01), survival rate of kids until weaning (p < 0.01) and number of kids per mating (p < 0.01) were found significant for fertility traits. The effects of lactation length on all milk yield parameters (lactation milk, protein, fat, total solid, solid not fat, casein and lactose yield) (p < 0.001) were found significant. The effects of age on all milk yield parameters (lactation milk, protein, fat, total solid, solid not fat, casein and lactose yield) (p < 0.001), protein rate (p < 0.05), fat rate (p < 0.05), total solid rate (p < 0.01), solid not fat rate (p < 0.05), casein rate (p < 0.05) and lactation length (p < 0.01) were found significant too. However, the effect of the AGPAT6 gene on milk yield and composition was not found significant in Saanen goats. The herd was found monomorphic (FF) for the CSN3 gene. The effects of sex on live weights until the 90ᵗʰ day of life (birth, weaning and 60ᵗʰ day average weights) were found statistically significant (p < 0.001). Maternal age affected only birth weight (p < 0.001). The effects of month at birth on all of the investigated days [the birth, 120ᵗʰ and 180ᵗʰ days (p < 0.05); the weaning, 60ᵗʰ and 90ᵗʰ days (p < 0.001)] were found significant. Birth type was found significant for the birth (p < 0.001), weaning (p < 0.01), 60ᵗʰ (p < 0.01) and 90ᵗʰ (p < 0.01) day average live weights. As a result, screening the other regions of the CSN3 and AGPAT6 genes and also investigating their phenotypic associations should be useful to clarify the efficiency of the target genes. Environmental factors such as maternal age, year, sex and birth type were found significant for some growth, fertility and milk traits in Saanen goats, so consideration of these factors could be useful as selection criteria in dairy goat breeding.
Keywords: fertility, growth, milk yield, Saanen goats
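Fixed-effect significance tests of the kind reported above can be reproduced in form with an analysis of variance; the sketch below uses invented records purely to show the mechanics, not the study's data.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical records: effects of maternal age and birth type on birth weight.
df = pd.DataFrame({
    "birth_weight": [3.4, 3.9, 3.1, 4.2, 3.0, 3.6, 2.9, 3.8],
    "maternal_age": ["2", "3", "2", "4", "2", "3", "4", "3"],
    "birth_type":   ["single", "single", "twin", "single",
                     "twin", "twin", "twin", "single"],
})

model = smf.ols("birth_weight ~ C(maternal_age) + C(birth_type)", data=df).fit()
print(anova_lm(model, typ=2))          # F test per fixed factor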
Procedia PDF Downloads 166
347 Wood as a Climate Buffer in a Supermarket
Authors: Kristine Nore, Alexander Severnisen, Petter Arnestad, Dimitris Kraniotis, Roy Rossebø
Abstract:
Natural materials like wood absorb and release moisture; thus, wood can buffer the indoor climate. When used wisely, this buffer potential can be used to counteract the influence of the outer climate on the building. The mass of moisture active in the buffer is defined as the potential hygrothermal mass, which can serve as energy storage in a building. This works like a natural heat pump, where the moisture damps the diurnal changes. In Norway, the ability of wood as a climate-buffering material is tested in several buildings with extensive use of wood, including supermarkets. This paper defines the potential hygrothermal mass of a supermarket building, including the chosen ventilation strategy and how the climate impact of the building is reduced. The building is located above the Arctic Circle, 50 m from the coastline, in Valnesfjord. It was built in 2015 and has a shopping area, including toilet and entrance, of 975 m². The climate of the area is polar according to the Köppen classification, but the supermarket still needs cooling on hot summer days. In order to contribute to the total energy balance, wood needs a dynamic influence to activate its hygrothermal mass. Drying and moistening of the wood are energy intensive, and this energy potential can be exploited. Examples are using solar heat for drying instead of heating the indoor air, and using raw air with high enthalpy, which allows dry wooden surfaces to absorb moisture and release latent heat. Weather forecasts are used to define the need for future cooling or heating; thus, the potential energy buffering of the wood can be optimized with intelligent ventilation control. The ventilation control in Valnesfjord includes the weather forecast and historical data: a five-day forecast and a two-day history. This is to prevent adjustments to smaller weather changes. The ventilation control has three zones. During summer, the moisture is retained to damp the effect of solar radiation through drying. In the wintertime, moist air is let into the shopping area to contribute to the heating. When letting the temperature down during the night, the moisture absorbed in the wood slows down the cooling. The ventilation system is shut down during the closing hours of the supermarket in this period. During autumn and spring, a regime of either storing the moisture or drying out according to the weather prognoses is defined. To ensure indoor climate quality, measurements of CO₂ and VOC overrule the low-energy control if needed. Verified simulations of the Valnesfjord building will form a basic model for investigating wood as a climate-regulating material in other climates as well. Future knowledge of the hygrothermal mass potential of materials is promising: by including the time-dependent buffer capacity of materials, building operators can achieve optimal efficiency of their ventilation systems. The use of wood as a climate-regulating material, through its potential hygrothermal mass and connected to weather prognoses, may provide up to 25% energy savings related to heating, cooling, and ventilation of a building.
Keywords: climate buffer, energy, hygrothermal mass, ventilation, wood, weather forecast
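The three-zone, forecast-aware control logic described above can be expressed as a simple rule set; the thresholds and mode names in this sketch are assumptions for illustration, not the plant's actual settings.

# Hypothetical rule-based mode selection: 5-day forecast, 2-day history,
# with indoor air quality always overruling the low-energy control.
def ventilation_mode(forecast_5d_mean_temp, history_2d_mean_temp,
                     co2_ppm, voc_ok=True):
    if co2_ppm > 1000 or not voc_ok:      # IAQ override (assumed CO2 limit)
        return "ventilate"
    if forecast_5d_mean_temp >= 15:       # summer zone: retain moisture, dry to
        return "retain_moisture"          # damp solar gains
    if forecast_5d_mean_temp <= 5:        # winter zone: admit moist air, allow
        return "humidify_night_setback"   # night shutdown
    # autumn/spring zone: store or dry according to the prognosis trend
    return ("store_moisture"
            if forecast_5d_mean_temp < history_2d_mean_temp else "dry_out")

print(ventilation_mode(18, 16, 650))      # -> retain_moisture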
Procedia PDF Downloads 215
346 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools
Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami
Abstract:
Urban design and power network planning have been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important solution to the challenges of climate change and energy security. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedure involved in this practice includes passive solar gain (in building design and urban design), solar integration, location strategy, and 3D models, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector-coupling strategies for solar power establishment in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess power generated from solar and inject it into the urban network during peak periods. The simulations and analyses were carried out in EnergyPlus software. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximize the utilization of solar PV in an urban distribution feeder. Additionally, 3D models are made in Revit, and they are key components of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce the strain on the power grid. This study highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction. Additionally, the paper explores advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design
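A toy dispatch loop illustrates how each connection point's battery can store PV surplus and discharge at the peak; the load and PV profiles below are invented, and only the 500 kW PV and 1 kWh BES sizes follow the text (with 1 h time steps, kW and kWh coincide numerically).

pv_kw   = [0, 120, 420, 500, 380, 60, 0, 0]     # assumed hourly PV output
load_kw = [200, 220, 260, 280, 300, 420, 460, 380]  # assumed hourly urban load

soc, cap_kwh, grid_kw = 0.0, 1.0, []
for pv, load in zip(pv_kw, load_kw):
    net = load - pv                    # + means import needed, - means PV surplus
    if net < 0:                        # charge the battery with surplus PV
        charge = min(-net, cap_kwh - soc)
        soc += charge
        net += charge                  # remainder is exported (negative import)
    else:                              # support the peak from the battery
        discharge = min(net, soc)
        soc -= discharge
        net -= discharge
    grid_kw.append(round(net, 2))

print(grid_kw)                         # hourly grid import (+) / export (-), kW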
Procedia PDF Downloads 76
345 Analyzing Global User Sentiments on Laptop Features: A Comparative Study of Preferences Across Economic Contexts
Authors: Mohammadreza Bakhtiari, Mehrdad Maghsoudi, Hamidreza Bakhtiari
Abstract:
The widespread adoption of laptops has become essential to modern lifestyles, supporting work, education, and entertainment. Social media platforms have emerged as key spaces where users share real-time feedback on laptop performance, providing a valuable source of data for understanding consumer preferences. This study leverages aspect-based sentiment analysis (ABSA) on 1.5 million tweets to examine how users from developed and developing countries perceive and prioritize 16 key laptop features. The analysis reveals that consumers in developing countries express higher satisfaction overall, emphasizing affordability, durability, and reliability. Conversely, users in developed countries demonstrate more critical attitudes, especially toward performance-related aspects such as cooling systems, battery life, and chargers. The study employs a mixed-methods approach, combining ABSA using the PyABSA framework with expert insights gathered through a Delphi panel of ten industry professionals. Data preprocessing included cleaning, filtering, and aspect extraction from tweets. Universal issues such as battery efficiency and fan performance were identified, reflecting shared challenges across markets. However, priorities diverge between regions: while users in developed countries demand high-performance models with advanced features, those in developing countries seek products that offer strong value for money and long-term durability. The findings suggest that laptop manufacturers should adopt a market-specific strategy by developing differentiated product lines. For developed markets, the focus should be on cutting-edge technologies, enhanced cooling solutions, and comprehensive warranty services. In developing markets, emphasis should be placed on affordability, versatile port options, and robust designs. Additionally, the study highlights the importance of universal charging solutions and continuous sentiment monitoring to adapt to evolving consumer needs. This research offers practical insights for manufacturers seeking to optimize product development and marketing strategies for global markets, ensuring enhanced user satisfaction and long-term competitiveness. Future studies could explore multi-source data integration and conduct longitudinal analyses to capture changing trends over time.
Keywords: consumer behavior, durability, laptop industry, sentiment analysis, social media analytics
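The aggregation step behind such regional comparisons can be sketched in a few lines of pandas; the aspect-sentiment labelling itself is assumed to be done upstream (e.g., with PyABSA), and the rows below are invented examples.

import pandas as pd

# Assumed upstream output: one (aspect, sentiment, region) triple per mention.
df = pd.DataFrame({
    "aspect":    ["battery", "battery", "cooling", "price", "price", "cooling"],
    "sentiment": [1, -1, -1, 1, 1, -1],          # +1 positive, -1 negative
    "region":    ["developing", "developed", "developed",
                  "developing", "developing", "developed"],
})

# Mean sentiment per feature and market, mirroring the regional comparison.
print(df.groupby(["region", "aspect"])["sentiment"].mean().unstack())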
Procedia PDF Downloads 15
344 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly: they are collected without manipulation of the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. For some years, those errors delayed the widespread use of non-invasive genetic information. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons of them. A comparison of methods using datasets that differ in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for getting information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) are used for matching genotypes, and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed; that is not a surprise, given the similarity of their pairwise likelihood and clustering algorithms. The matches from ETLM showed almost no similarity with the genotypes that were matched by the other methods. The different clustering algorithm and error model of ETLM seem to lead to a stricter selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently across the datasets. There was a consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimations when compared with Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events, which makes the estimator sensitive to data heterogeneity. Heterogeneity here means different capture rates between individuals. In those examples, tolerance for homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimations, leading to over- and underestimation of population numbers. Capwire is then advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
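The core idea of error-tolerant genotype matching can be sketched as follows; the mismatch threshold and the toy genotypes are illustrative assumptions, not any of the compared tools' actual error models.

# Two multilocus genotypes are declared the same individual if they mismatch at
# no more than `max_mismatch` loci, which absorbs e.g. allelic dropout typical
# of scat or hair samples.
def n_mismatches(g1, g2):
    # each genotype: list of (allele_a, allele_b) per locus; unordered comparison
    return sum(frozenset(a) != frozenset(b) for a, b in zip(g1, g2))

def same_individual(g1, g2, max_mismatch=1):
    return n_mismatches(g1, g2) <= max_mismatch

sample_1 = [(120, 124), (98, 98), (201, 205)]
sample_2 = [(124, 120), (98, 100), (201, 205)]   # one locus off (dropout-like)
print(same_individual(sample_1, sample_2))       # -> True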
Procedia PDF Downloads 143
343 Use of Misoprostol in Pregnancy Termination in the Third Trimester: Oral versus Vaginal Route
Authors: Saimir Cenameri, Arjana Tereziu, Kastriot Dallaku
Abstract:
Introduction: Intra-uterine death is a common problem in obstetric practice and can lead to complications if left to resolve spontaneously. The cervix is unprepared, making induction of labor difficult. Misoprostol is a synthetic prostaglandin E1 analogue, inexpensive, and valuable thanks to its ability to bring about changes in the cervix that lead to the induction of uterine contractions. Misoprostol is quickly absorbed when taken orally, resulting in high initial peak serum concentrations compared with the vaginal route. The vaginal misoprostol peak serum concentration is not as high and demonstrates a more gradual serum concentration decline. This is associated with many benefits for the patient: fast induction of labor, smaller doses, and fewer (dose-dependent) side effects. The most commonly used regimen has been 50 μg every 4 hours, with a high percentage of success and limited side effects. Objective: Evaluation of the efficiency of oral and vaginal misoprostol in inducing labor, compared with its use not according to a previously defined protocol. Methods: Participants in this study included patients at U.H.O.G. 'Koco Gliozheni', Tirana, from April 2004 to July 2006, presenting with an indication for induction of labor in the third trimester for pregnancy termination. A total of 37 patients were admitted for labor induction: 26 were randomly assigned according to the protocol to the oral or vaginal route (10 vs. 16), and a control group (11) not subject to the protocol was created. Oral or vaginal misoprostol was administered at a dose of 50 μg/4 h, while the participants in the control group were treated individually by the members of the medical staff. The main outcome of interest was the time from induction of labor to birth. The Kruskal-Wallis test was used to compare the average age, parity, woman's weight, gestational age, Bishop score, size of the uterus and weight of the fetus between the four groups in the study. Fisher's exact test was used to compare day-stay and causes in the four groups. The Mann-Whitney test was used to compare the time of expulsion and the number of doses between the oral and vaginal groups. For all statistical tests used, a value of P ≤ 0.05 was considered statistically significant. Results: The four groups were comparable with regard to woman's age and weight, parity, abortion indication, Bishop score, fetal weight and gestational age. There was a significant difference in the percentage of deliveries within 24 hours. The average time from induction to birth per route (vaginal, oral, according to protocol, and not according to the protocol) was, respectively, 10.43 h, 21.10 h, 15.77 h and 21.57 h. There was no difference in maternal complications between groups. Conclusions: Use of vaginal misoprostol for inducing labor in the third trimester for termination of pregnancy appears to be more effective than the oral route, and even more so than use not according to previously approved protocols, where complications are greater and unjustified.
Keywords: inducing labor, misoprostol, pregnancy termination, third trimester
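The nonparametric tests named above are readily available in scipy; the sketch below shows the comparison structure on invented induction-to-birth times, not the trial's data.

from scipy.stats import kruskal, mannwhitneyu

# Illustrative induction-to-birth times in hours (invented placeholders).
vaginal  = [8.5, 10.0, 11.2, 12.0, 9.8]
oral     = [18.0, 22.5, 20.1, 23.0, 21.4]
protocol = [14.0, 16.5, 15.2, 17.0]
control  = [20.0, 24.0, 21.5, 22.0]

print(kruskal(vaginal, oral, protocol, control))   # four-group comparison
print(mannwhitneyu(vaginal, oral))                 # oral vs. vaginal, as in the study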
Procedia PDF Downloads 185
342 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the whole industrial delivery chain and accounts for a large proportion of the total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Due to the discrete and heterogeneous needs and spatial distribution of customer demand, which lead to a higher delivery failure rate and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the users' experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery became a new hotspot, which also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design/redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with pickup facilities and heterogeneous demand (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large neighbourhood search (ALNS) algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid ALNS algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impacts on the results, and we suggest helpful managerial insights for courier companies.
Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
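A skeletal version of the destroy-and-repair loop with adaptive operator weights is sketched below on a toy routing instance; it illustrates the ALNS mechanics only and is not the paper's LRP-PFHD implementation.

import math, random

pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5)]   # toy TSP stand-in for a route

def cost(tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def destroy_random(tour, rng):                   # destroy: drop one visit
    tour = list(tour)
    removed = tour.pop(rng.randrange(len(tour)))
    return tour, [removed]

def repair_greedy(partial, removed, rng):        # repair: cheapest reinsertion
    tour = list(partial)
    for city in removed:
        best = min(range(len(tour) + 1),
                   key=lambda i: cost(tour[:i] + [city] + tour[i:]))
        tour.insert(best, city)
    return tuple(tour)

def alns(initial, destroys, repairs, iters=500, seed=0):
    rng = random.Random(seed)
    best = cur = tuple(initial)
    w = [1.0] * len(destroys)                    # adaptive destroy-operator weights
    for _ in range(iters):
        k = rng.choices(range(len(destroys)), weights=w)[0]
        partial, removed = destroys[k](cur, rng)
        cand = rng.choice(repairs)(partial, removed, rng)
        if cost(cand) < cost(cur):               # accept improving moves
            cur = cand
            w[k] = 0.8 * w[k] + 0.2 * 2.0        # reward the operator used
            best = min(best, cand, key=cost)
        else:
            w[k] = 0.8 * w[k] + 0.2 * 0.5        # decay its weight otherwise
    return best

print(alns(range(len(pts)), [destroy_random], [repair_greedy]))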
Procedia PDF Downloads 78
341 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis
Authors: Mohamed Ali Abdennadher
Abstract:
Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-Ray Computed Tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo processing through noise reduction, thresholding, and watershed segmentation techniques to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of the particle size distribution. The processed particle data then serves as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges in parallelizing FEA, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to extend to very high core counts. The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments.
Keywords: X-Ray computed tomography, finite element analysis, soil compression behavior, particle morphology
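The thresholding-plus-watershed step of the processing chain can be illustrated on a synthetic 2D slice with scikit-image; the geometry and parameters below are arbitrary choices for a toy example, not the study's settings.

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic slice: two overlapping "particles" to be split by the watershed.
img = np.zeros((80, 80))
yy, xx = np.mgrid[0:80, 0:80]
img[(yy - 30)**2 + (xx - 30)**2 < 15**2] = 1.0
img[(yy - 45)**2 + (xx - 50)**2 < 15**2] = 1.0

binary = img > threshold_otsu(img)                 # thresholding step
distance = ndi.distance_transform_edt(binary)      # distance map inside grains
peaks = peak_local_max(distance, labels=binary, min_distance=10)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-distance, markers, mask=binary)  # marker-based watershed
print(labels.max(), "particles found")             # -> 2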
Procedia PDF Downloads 29
340 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students
Authors: Majed A. Alsalem
Abstract:
In our contemporary world, literacy is an essential skill that enables students to increase their efficiency in managing the many assignments they receive that require understanding and knowledge of the world around them. In addition, literacy enhances student participation in society, improving their ability to learn about the world and interact with others and facilitating the exchange of ideas and the sharing of knowledge. Therefore, literacy needs to be studied and understood in its full range of contexts. It should be seen as social and cultural practices with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs that have been used for deaf and hard-of-hearing (DHH) students to improve their literacy level. The most critical part of this process is the teachers; therefore, teachers will be the central focus of this study. Teachers' main job is to increase students' performance by fostering strategies through collaborative teamwork, higher-order thinking, and effective use of new information technologies. Teachers, as primary leaders in the learning process, should be aware of new strategies, approaches, methods, and frameworks of teaching in order to apply them to their instruction. Literacy, from a wider view, means the acquisition of adequate and relevant reading skills that enable progression in one's career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy has changed the traditional meaning of literacy, which is the ability to read and write. New literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term new literacy has received a lot of attention in the education field over the last few years. New literacy provides multiple ways of engagement, especially to those with disabilities and other diverse learning needs. For example, using a number of online tools in the classroom provides students with disabilities new ways to engage with the content, take in information, and express their understanding of this content. This study will provide teachers with the highest quality of training sessions to meet the needs of DHH students so as to increase their literacy levels. This study will build a platform between regular instructional designs and digital materials that students can interact with. The intervention that will be applied in this study will be to train teachers of DHH students to base their instructional designs on the Technology Acceptance Model (TAM). Based on the power analysis that was done for this study, 98 teachers need to be included. This study will choose teachers randomly to increase internal and external validity and to provide a representative sample of the population that this study aims to measure, providing a base for further studies. This study is still in progress, and the initial results are promising, showing how students have engaged with digital books.
Keywords: deaf and hard of hearing, digital books, literacy, technology
Procedia PDF Downloads 487
339 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design, and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues in the energy refurbishment of a university building in the heating-dominated climate of southern Italy. In more detail, the importance of using validated models is examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to differentiating thermal zones or to modifying or adapting predefined profiles, and the results of the design are affected positively or negatively without any warning about it. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest variables in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, and conventional schedules, with important consequences for the prediction of energy consumption. The problem is surely difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error that is committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This could be a typical uncertainty for a case study such as the presented one, where there is no regulation system for the HVAC system and thus the occupant cannot interact with it. In more detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request; then, the different entries of consumption are analyzed, and for the more interesting cases also the comparison between calibration indexes. Moreover, for the optimal refurbishment solution, the same simulations are performed, and the variation in the predicted energy savings and global cost reduction is evidenced. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.
Keywords: energy simulation, modelling calibration, occupant behavior, university building
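The kind of sensitivity experiment described can be miniaturized as follows: perturb a deterministic occupancy schedule stochastically and observe the spread it induces in a simple internal-gains calculation; all parameters are illustrative assumptions, not the case study's values.

import numpy as np

rng = np.random.default_rng(42)
# Assumed deterministic 24-hour occupancy fraction profile for one thermal zone.
base = np.array([0, 0, 0, 0, 0, 0, .2, .6, 1, 1, 1, .8,
                 .6, 1, 1, 1, .8, .4, .1, 0, 0, 0, 0, 0])

gain_w_per_person, n_people = 120.0, 50     # assumed internal-gain parameters

loads = []
for _ in range(100):                        # 100 stochastic schedule variants
    noisy = np.clip(base + rng.normal(0, 0.15, base.size), 0, 1)
    loads.append((noisy * n_people * gain_w_per_person).sum() / 1000)  # kWh/day

print(f"daily internal gains: {np.mean(loads):.1f} +/- {np.std(loads):.1f} kWh")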
Procedia PDF Downloads 140
338 Deep Learning Based on Image Decomposition for Restoration of Intrinsic Representation
Authors: Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Kensuke Nakamura, Dongeun Choi, Byung-Woo Hong
Abstract:
Artefacts are commonly encountered in the imaging process of clinical computed tomography (CT), where an artefact refers to any systematic discrepancy between the reconstructed observation and the true attenuation coefficient of the object. It is known that CT images are inherently more prone to artefacts due to their image formation process, in which a large number of independent detectors are involved and assumed to yield consistent measurements. There are a number of different artefact types, including noise, beam hardening, scatter, pseudo-enhancement, motion, helical, ring, and metal artefacts, which cause serious difficulties in reading images. Thus, it is desirable to remove nuisance factors from the degraded image, leaving the fundamental intrinsic information that can provide a better interpretation of the anatomical and pathological characteristics. However, this is considered a difficult task due to the high dimensionality and variability of the data to be recovered, which naturally motivates the use of machine learning techniques. We propose an image restoration algorithm based on the deep neural network framework, in which denoising auto-encoders are stacked to build multiple layers. The denoising auto-encoder is a variant of the classical auto-encoder that takes input data and maps it to a hidden representation through a deterministic mapping using a non-linear activation function. The latent representation is then mapped back into a reconstruction whose size is the same as that of the input data. The reconstruction error can be measured by the traditional squared error, assuming the residual follows a normal distribution. In addition to the designed loss function, an effective regularization scheme is applied, using residual-driven dropout determined from the gradient at each layer. The optimal weights are computed by the classical stochastic gradient descent algorithm combined with the back-propagation algorithm. In our algorithm, we initially decompose an input image into its intrinsic representation and the nuisance factors, including artefacts, based on the classical Total Variation problem, which can be efficiently optimized by a convex optimization algorithm such as the primal-dual method. The intrinsic forms of the input images are provided to the deep denoising auto-encoders along with their original forms in the training phase. In the testing phase, a given image is first decomposed into its intrinsic form and then provided to the trained network to obtain its reconstruction. We apply our algorithm to the restoration of CT images corrupted by artefacts. It is shown that our algorithm improves readability and enhances the anatomical and pathological properties of the object. The quantitative evaluation is performed in terms of the PSNR, and the qualitative evaluation shows significant improvement in reading images despite degrading artefacts. The experimental results indicate the potential of our algorithm as a prior solution to image interpretation tasks in a variety of medical imaging applications. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion). Keywords: auto-encoder neural network, CT image artefact, deep learning, intrinsic image representation, noise reduction, total variation
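One layer of the stacked denoising auto-encoder described above can be sketched as follows. This is a hedged illustration: the layer sizes, noise level, and optimizer settings are assumptions, not the authors' configuration, and the residual-driven dropout scheme is omitted for brevity:

```python
# Hedged sketch of a single denoising auto-encoder layer of the kind
# stacked in the paper; sizes, noise level, and optimizer settings are
# illustrative assumptions, not the authors' configuration.
import torch
import torch.nn as nn

class DenoisingAutoEncoder(nn.Module):
    def __init__(self, n_in=4096, n_hidden=1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        # Corrupt the input, encode to the hidden representation, then
        # map back to a reconstruction the same size as the input.
        noisy = x + 0.1 * torch.randn_like(x)
        return self.decoder(self.encoder(noisy))

model = DenoisingAutoEncoder()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()  # squared error, assuming Gaussian residuals

clean = torch.rand(8, 4096)  # stand-in for intrinsic image patches
for step in range(5):
    optimizer.zero_grad()
    loss = criterion(model(clean), clean)
    loss.backward()   # back-propagation
    optimizer.step()  # stochastic gradient descent update
print(f"final reconstruction loss: {loss.item():.4f}")
```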
Procedia PDF Downloads 190
337 Application of Multidimensional Model of Evaluating Organisational Performance in Moroccan Sport Clubs
Authors: Zineb Jibraili, Said Ouhadi, Jorge Arana
Abstract:
Introduction: Organizational performance is recognized by some theorists as a one-dimensional concept and by others as multidimensional. This concept, which is already difficult to apply in traditional companies, is even harder to identify, measure and manage where voluntary organizations are concerned, essentially because of the complexity of that form of organization, such as sport clubs, which are characterized by multiple goals and multiple constituencies. Indeed, the new culture of professionalization and modernization around organizational performance has created new pressures from the state, sponsors, members and other stakeholders, which have required these sport organizations to become more performance oriented, or to build their capacity in order to better manage their organizational performance. The evaluation of performance can be made by evaluating the input (e.g. available resources), throughput (e.g. processing of the input) and output (e.g. goals achieved) of the organization. In non-profit organizations (NPOs), questions of performance have become increasingly important in the world of practice. To our knowledge, most studies have used the same methods to evaluate performance in NPSOs, but no recent study has proposed a club-specific model. Based on a review of the studies that specifically addressed the organizational performance (and effectiveness) of NPSOs at the operational level, the present paper aims to provide a multidimensional framework in order to understand, analyse and measure the organizational performance of sport clubs. This paper combines all the dimensions found in the literature and chooses the most suited of them for the model that we develop in the case of Moroccan sport clubs. Method: We propose to apply our unified model of evaluating organizational performance, which takes into account all the limitations found in the literature. For this purpose, we use a qualitative study on a sample of Moroccan sport clubs (football, basketball, handball and volleyball). The sample comprises data from clubs participating in the first division of the professional league over the period from 2011 to 2016. Each club had to meet specific criteria in order to be included in the sample: 1. Each club must have full financial data published in its annual financial statements, audited by an independent chartered accountant. 2. Each club must have sufficient data regarding its sport and financial performance. 3. Each club must have participated at least once in the 1st division of the professional league. Result: The study showed that the dimensions that constitute the model exist in the field, with some small modifications. The correlations between the different dimensions are positive. Discussion: The aim of this study is to test, for the Moroccan case, the unified model that emerged from earlier and narrower approaches. Using the input-throughput-output model to frame efficiency, it was possible to identify and define five dimensions of organizational effectiveness applied to this field of study. Keywords: organisational performance, multidimensional model, evaluation of organizational performance, sport clubs
Procedia PDF Downloads 323
336 Bioactive Substances-Loaded Water-in-Oil/Oil-in-Water Emulsions for Dietary Supplementation in the Elderly
Authors: Agnieszka Markowska-Radomska, Ewa Dluska
Abstract:
Maintaining a diet dense in bioactive substances is important for the elderly, especially to prevent diseases and to support healthy ageing. Adequate bioactive substance intake can reduce the risk of developing chronic diseases (e.g. cardiovascular diseases, osteoporosis, neurodegenerative syndromes, diseases of the oral cavity, gastrointestinal (GI) disorders, diabetes, and cancer). This can be achieved by introducing comprehensive supplementation of the components necessary for the proper functioning of the ageing body. The paper proposes multiple emulsions of the W1/O/W2 (water-in-oil-in-water) type as carriers for effective co-encapsulation and co-delivery of bioactive substances in supplementation of the elderly. Multiple emulsions are complex structured systems ("drops in drops"). The functional structure of the W1/O/W2 emulsion enables (i) incorporation of one or more bioactive components (lipophilic and hydrophilic); (ii) enhancement of the stability and bioavailability of the encapsulated substances; (iii) prevention of interactions between substances, as well as with the external environment; and (iv) delivery to a specific location and release in a controlled manner. The multiple emulsions were prepared by a one-step method in the Couette-Taylor flow (CTF) contactor in a continuous manner; in general, a two-step emulsification process is otherwise used to obtain multiple emulsions. The paper also proposes emulsion functionalization by introducing a pH-responsive biopolymer, carboxymethylcellulose sodium salt (CMC-Na), into the external phase, which made it possible to achieve release of components controlled by the pH of the gastrointestinal environment. The membrane phase of the emulsions was soybean oil. The W1/O/W2 emulsions were evaluated for their characteristics (drop size/drop size distribution, volume packing fraction), encapsulation efficiency and stability during storage (up to 30 days) at 4ºC and 25ºC. Also, the in vitro co-release of multiple substances was investigated in a simulated gastrointestinal environment (different pH values and compositions of the release medium). Three groups of stable multiple emulsions were obtained: emulsions I with co-encapsulated vitamins B12, B6 and resveratrol; emulsions II with vitamin A and β-carotene; and emulsions III with vitamins C, E and D3. The substances were encapsulated in the appropriate emulsion phases depending on their solubility. For all emulsions, high encapsulation efficiency (over 95%) and a high volume packing fraction of internal droplets (0.54-0.76) were achieved. In addition, due to the presence of a polymer (CMC-Na) with adhesive properties, high encapsulation stability during emulsion storage was achieved. The co-release study of encapsulated bioactive substances confirmed the possibility of modifying the release profiles. It was found that the release process can be controlled through the composition, structure and physicochemical parameters of the emulsions and the pH of the release medium. The results showed that the obtained multiple emulsions may be used as liquid complex carriers for controlled/modified/site-specific co-delivery of bioactive substances in dietary supplementation of the elderly. Keywords: bioactive substance co-release, co-encapsulation, elderly supplementation, multiple emulsion
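For reference, the encapsulation efficiency figure quoted above is conventionally defined as the encapsulated fraction of the total amount of substance added; the abstract does not spell out the formula it used, so the standard definition is given here as an assumption:

```latex
% Encapsulation efficiency as commonly defined for multiple emulsions
% (the abstract does not state the exact formula used; this is the
% standard definition, shown as an assumption).
\[
  \mathrm{EE}\,(\%) \;=\;
  \frac{m_{\text{encapsulated}}}{m_{\text{total added}}} \times 100
\]
% Example: encapsulating 0.96 mg of a vitamin out of 1.00 mg added
% gives EE = 96\%, consistent with the reported ``over 95\%''.
```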
Procedia PDF Downloads 198
335 Targeting Glucocorticoid Receptor Eliminate Dormant Chemoresistant Cancer Stem Cells in Glioblastoma
Authors: Aoxue Yang, Weili Tian, Haikun Liu
Abstract:
Brain tumor stem cells (BTSCs) are resistant to therapy and give rise to recurrent tumors. These rare and elusive cells are likely to disseminate during cancer progression, and some may enter dormancy, remaining viable but not proliferating. The identification of dormant BTSCs is thus necessary to design effective therapies for glioblastoma (GBM) patients. Glucocorticoids (GCs) are used to treat GBM-associated edema. However, glucocorticoids participate in the physiological response to psychosocial stress, which is linked to poor cancer prognosis. This raises the concern that glucocorticoids affect the tumor and BTSCs. Identifying markers specifically expressed by BTSCs may enable specific therapies that spare their regular tissue-resident counterparts. By ribosome profiling analysis, we have identified that glycerol-3-phosphate dehydrogenase 1 (GPD1) is expressed by dormant BTSCs but not by NSCs. Through different stress-induction experiments in vitro, we found that only dexamethasone (DEXA) can significantly increase the expression of GPD1 in NSCs. Conversely, mifepristone (MIFE), a glucocorticoid receptor antagonist, decreases GPD1 protein levels and weakens proliferation and stemness in BTSCs. Furthermore, DEXA can induce GPD1 expression in the brains of tumor-bearing mice and shorten animal survival, whereas MIFE has the opposite effect, prolonging mouse lifespan. Knocking out GR in NSCs blocks the upregulation of GPD1 induced by DEXA, and ChIP-Seq identified the specific sequences on the GPD1 promoter bound by GR, which improve the efficiency of GPD1 transcription. Moreover, GR and GPD1 are highly co-stained on GBM sections obtained from patients and mice. All these findings confirm that GR regulates GPD1 and that loss of GPD1 impairs multiple pathways important for BTSC maintenance. GPD1 is also a critical enzyme regulating glycolysis and lipid synthesis. We observed that DEXA and MIFE can change the metabolic profiles of BTSCs by regulating GPD1, shifting the transition into or out of cell dormancy. Our transcriptome and lipidomics analyses demonstrated that cell-cycle signaling and phosphoglyceride synthesis pathways contributed substantially to the effects of the GPD1 inhibition caused by MIFE. In conclusion, our findings raise the concern that treatment of GBM with GCs may compromise the efficacy of chemotherapy and contribute to BTSC dormancy. Inhibition of GR can dramatically reduce GPD1 and extend the survival of GBM-bearing mice. The molecular link between GPD1 and GR may provide an attractive therapeutic target for glioblastoma. Keywords: cancer stem cell, dormancy, glioblastoma, glycerol-3-phosphate dehydrogenase 1, glucocorticoid receptor, dexamethasone, RNA-sequencing, phosphoglycerides
Procedia PDF Downloads 132
334 Malaysia as a Case Study for Climate Policy Integration into Energy Policy
Authors: Marcus Lee
Abstract:
The energy sector is the largest contributor of greenhouse gas emissions in Malaysia, and these emissions induce climate change. The climate change problem is therefore an energy sector problem. Tackling climate change issues successfully is contingent on actions taken in the energy sector. The researcher propounds that 'Climate Policy Integration' (CPI) into energy policy is a viable and insufficiently developed strategy in Malaysia that promotes the synergies between climate change and energy objectives, in order to achieve the targets found in both climate change and energy policies. In exploring this hypothesis, this paper presentation will focus on two particular aspects. Firstly, the meaning of CPI as an approach and as a concept will be explored. As an approach, CPI into energy policy means the integration of climate change objectives into the energy policy area. Its subject matter focuses on establishing the functional interrelations between climate change and energy objectives, by promoting their synergies and minimising their contradictions. However, its conceptual underpinnings are less than straightforward. Drawing from the 'principle of integration' found in international treaties and declarations such as the Stockholm Declaration 1972, the Rio Declaration 1992 and the United Nations Framework Convention on Climate Change 1992 ('UNFCCC'), this paper presentation will explore the contradictions in international standards on how the sustainable development tenets of environmental sustainability, social development and economic development are to be balanced, and its relevance to CPI. Further, the researcher will consider whether authority may be derived from international treaties and declarations to argue for the prioritisation of environmental sustainability over the other sustainable development tenets through CPI. Secondly, this paper presentation will explore the degree to which CPI into energy policy has been achieved and pursued in Malaysia. In particular, the strength of the conceptual framework with regard to CPI in Malaysian governance will be considered by assessing Malaysia's National Policy on Climate Change (2009) ('NPCC 2009'). The development (or lack thereof) of CPI as an approach since the publication of the NPCC 2009 will also be assessed based on official government documents and policies that may have a climate change and/or energy agenda. Malaysia's National Renewable Energy Policy and Action Plan (2010), draft National Energy Efficiency Action Plan (2014), Intended Nationally Determined Contributions (2015) in relation to the Paris Agreement, 11th Malaysia Plan (2015) and Biennial Update Report to the UNFCCC (2015) will be discussed. These documents will be assessed for the presence of CPI based on their language/drafting as well as the extent of CPI-related subject matter expressed in them. Based on the analysis, the researcher will propose solutions on how to improve Malaysia's climate change and energy governance. The theory of reflexive governance will be applied to CPI. The concluding remarks will consider whether CPI reflects reflexive governance by demonstrating how the governance process itself can be the object of shaping outcomes. Keywords: climate policy integration, mainstreaming, policy coherence, Malaysian energy governance
Procedia PDF Downloads 198
333 Vibration and Freeze-Thaw Cycling Tests on Fuel Cells for Automotive Applications
Authors: Gema M. Rodado, Jose M. Olavarrieta
Abstract:
Hydrogen fuel cell technologies have experienced a great boost in the last decades, with a significant increase in the production of these devices for both stationary and portable (mainly automotive) applications; this growth is driven by two main factors: environmental pollution and energy shortage. A fuel cell is an electrochemical device that converts chemical energy directly into electricity by using hydrogen and oxygen gases as reactants and obtaining water and heat as byproducts of the chemical reaction. Fuel cells, specifically those of Proton Exchange Membrane (PEM) technology, are considered an alternative to internal combustion engines, mainly because of the low emissions they produce (almost zero), their high efficiency and their low operating temperatures (< 373 K). The introduction and use of fuel cells in the automotive market require the development of standardized and validated procedures to test and evaluate their performance in different environmental conditions, including vibrations and freeze-thaw cycles. These situations of vibration and extremely low/high temperatures can affect the physical integrity, or even the proper operation and performance, of the fuel cell stack placed in a vehicle in circulation or exposed to different climatic conditions. The main objective of this work is the development and validation of vibration and freeze-thaw cycling test procedures for fuel cell stacks that can be used in a vehicle, in order to consolidate their safety, performance, and durability. In this context, different experimental tests were carried out at the facilities of the National Hydrogen Centre (CNH2). The experimental equipment used was: a vibration platform (shaker) for vibration test analysis on fuel cells in the three axis directions with different vibration profiles; a walk-in climatic chamber to test the starting, operating, and stopping behavior of fuel cells under defined extreme conditions; and a test station designed and developed by the CNH2 to test and characterize PEM fuel cell stacks up to 10 kWe. A 5 kWe PEM fuel cell stack in off-operation mode was used to carry out two independent experimental procedures. On the one hand, the fuel cell was subjected to a sinusoidal vibration test on the shaker in the three axis directions, defined by acceleration and amplitudes in the frequency range of 7 to 200 Hz for a total of three hours in each direction. On the other hand, the climatic chamber was used to simulate freeze-thaw cycles over a temperature range between 313 K and 243 K (approximately +40 ºC to -30 ºC), with an average relative humidity of 50% and a recommended ramp-up and ramp-down rate of 1 K/min. The polarization curve and gas leakage rate were determined before and after the vibration and freeze-thaw tests at the fuel cell stack test station to evaluate the robustness of the stack. The results were very similar, which indicates that the tests did not affect the fuel cell stack structure and performance. The proposed procedures were verified and can be used as a starting point for performing other tests with different fuel cells. Keywords: climatic chamber, freeze-thaw cycles, PEM fuel cell, shaker, vibration tests
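A freeze-thaw cycle of the kind described can be expressed as a set-point schedule for the climatic chamber. The sketch below uses the temperature limits and 1 K/min ramp rate from the abstract; the dwell time at each extreme and the number of cycles are assumptions, since the abstract does not report them:

```python
# Hedged sketch: generate a freeze-thaw temperature set-point profile
# for a climatic chamber, using the abstract's limits (313 K to 243 K)
# and 1 K/min ramps; dwell time and cycle count are assumptions.
def freeze_thaw_profile(t_high=313.0, t_low=243.0, ramp_k_per_min=1.0,
                        dwell_min=30, cycles=2):
    setpoints = []  # one set-point per minute
    steps = int((t_high - t_low) / ramp_k_per_min)
    for _ in range(cycles):
        # Ramp down, hold cold, ramp up, hold warm.
        setpoints += [t_high - i * ramp_k_per_min for i in range(steps)]
        setpoints += [t_low] * dwell_min
        setpoints += [t_low + i * ramp_k_per_min for i in range(steps)]
        setpoints += [t_high] * dwell_min
    return setpoints

profile = freeze_thaw_profile()
print(f"{len(profile)} minutes, min {min(profile)} K, max {max(profile)} K")
```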
Procedia PDF Downloads 117
332 Influence of Disintegration of Sida hermaphrodita Silage on Methane Fermentation Efficiency
Authors: Marcin Zielinski, Marcin Debowski, Paulina Rusanowska, Magda Dudek
Abstract:
As a result of sonication, the destruction of complex biomass structures leads to an increase in the biogas yield from the conditioned material. First, the amount of organic matter released into the solution due to disintegration was determined. This parameter was determined from changes in the carbon content in the liquid phase of the conditioned substrate. The amount of carbon in the liquid phase increased as the sonication time was extended to 16 min. Further increases in the duration of sonication did not cause a statistically significant increase in the amount of organic carbon in the liquid phase. The disintegrated material was then used for respirometric measurements to determine the impact of the conditioning process on methane fermentation effectiveness. A relationship between the amount of energy introduced into the lignocellulosic substrate and the amount of biogas produced was demonstrated. A statistically significant increase in the amount of biogas was observed up to 16 min of sonication; a further increase in the energy input of the conditioning process did not significantly increase the production of biogas from the treated substrate. At that point, the biogas production from the conditioned substrate was 17% higher than from the reference biomass. The ultrasonic disintegration method did not significantly affect the observed biogas composition: in all series, the methane content of the biogas produced from the conditioned substrate was similar to that obtained with the raw substrate sample (51.1%). Another method of substrate conditioning was hydrothermal depolymerization. This method consists in applying increased temperature and pressure to the substrate. These phenomena destroy the structure of the processed material and release organic compounds into the solution, which should increase the amount of biogas produced from the treated biomass. The hydrothermal depolymerization was conducted using an innovative microwave heating method; control measurements were performed using conventional heating. The obtained results indicate a relationship between depolymerization temperature and the amount of biogas. The biogas production coefficients increased significantly as the depolymerization temperature increased to 150°C. Raising the depolymerization temperature further, to 180°C, did not significantly increase the amount of biogas produced in the respirometric tests. As a result of hydrothermal depolymerization using microwaves at 150°C for 20 min, the biogas yield from the Sida silage was 780 L/kg VS, more than double the 370 L/kg VS obtained from the same silage without depolymerization. The study showed that microwave heating makes it possible to depolymerize the substrate effectively. Significant differences occurred especially in the temperature range of 130-150ºC. The pre-treatment of Sida hermaphrodita silage (the biogas substrate) did not significantly affect the quality of the biogas produced; the methane concentration was about 51.5% on average. The study was carried out in the framework of the project under the BIOSTRATEG program funded by the National Centre for Research and Development, No. 1/270745/2/NCBR/2015, 'Dietary, power, and economic potential of Sida hermaphrodita cultivation on fallow land'. Keywords: disintegration, biogas, methane fermentation, Virginia fanpetals, biomass
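A quick arithmetic check of the reported gains, using only the figures quoted in the abstract (a sketch, not part of the original study):

```python
# Quick check of the relative yield gains quoted in the abstract.
raw_yield = 370.0            # L biogas / kg VS, untreated silage
depolymerized_yield = 780.0  # L / kg VS after microwave treatment at 150 C

increase_pct = 100.0 * (depolymerized_yield - raw_yield) / raw_yield
print(f"hydrothermal depolymerization: +{increase_pct:.0f}%")  # ~ +111%

sonicated_gain = 17.0  # % gain over reference after 16 min sonication
print(f"sonication (16 min): +{sonicated_gain:.0f}% vs reference")
```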
Procedia PDF Downloads 309
331 Exploratory Study on Mediating Role of Commitment-to-Change in Relations between Employee Voice, Employee Involvement and Organizational Change Readiness
Authors: Rohini Sharma, Chandan Kumar Sahoo, Rama Krishna Gupta Potnuru
Abstract:
Strong competitive forces and requirements to achieve efficiency are forcing organizations to realize the necessity and inevitability of change. What's more, the trend does not appear to be abating. Researchers have estimated that about two-thirds of change projects fail. Empirical evidence further shows that organizations invest significantly in planned change, but the people side is accounted for only in a token or instrumental way, which is identified as one of the important reasons why change endeavours fail. However, whatever the reason for change, organizational change readiness must be gauged prior to the institutionalization of organizational change. Hence, in this study, the influence of employee voice and employee involvement on organizational change readiness via commitment-to-change is examined, as this is an area yet to be extensively studied. Also, though a recent study has investigated the interrelationship between leadership, organizational change readiness and commitment-to-change, our study further examined these constructs in relation to employee voice and employee involvement, which play a consequential role in organizational change readiness. Further, an integrated conceptual model weaving together varied concepts relating to organizational readiness, with commitment-to-change as the mediator, was found to be an area requiring more theorizing and empirical validation, and this study, rooted in an Indian public sector organization, is a step in this direction. Data for the study were collected through a survey among employees of Rourkela Steel Plant (RSP), a unit of Steel Authority of India Limited (SAIL) and the first integrated steel plant in the public sector in India, for which a stratified random sampling method was adopted. The survey schedule was distributed to around 700 employees, out of which 516 complete responses were obtained. Pre-validated scales were used for the study. All the variables in the study were measured on a five-point Likert scale ranging from "strongly disagree (1)" to "strongly agree (5)". Structural equation modeling (SEM) using AMOS 22, which offers a simultaneous test of an entire system of variables in a model, was used to examine the hypothesized model. The study results show that the interrelationships between employee voice and commitment-to-change, between employee involvement and commitment-to-change, and between commitment-to-change and organizational change readiness were all significant. To test the mediation hypotheses, Baron and Kenny's technique was used. Examination of the direct and mediated effects confirmed that commitment-to-change partially mediated the relation between employee involvement and organizational change readiness, but did not mediate the relation between employee voice and organizational change readiness. The empirical exploration therefore establishes that it is important to harness employees' valuable suggestions regarding change for building organizational change readiness. Regarding employee involvement, it was found that sharing information and involving people in decision-making lead to the creation of a participative climate, which elicits employee commitment during change, and commitment-to-change, in turn, fosters organizational change readiness. Keywords: commitment-to-change, change management, employee voice, employee involvement, organizational change readiness
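Baron and Kenny's technique, named above, proceeds through three regressions. A hedged sketch on synthetic data follows; the study's survey data are not public, so the coefficients below are invented for illustration, with variable names mirroring the constructs in the abstract:

```python
# Hedged sketch of Baron and Kenny's mediation steps on synthetic data
# (not the study's data; effect sizes here are invented placeholders).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 516  # sample size reported in the abstract
involvement = rng.normal(size=n)
commitment = 0.5 * involvement + rng.normal(scale=0.8, size=n)  # mediator
readiness = 0.3 * involvement + 0.4 * commitment + rng.normal(size=n)

def fit(y, *xs):
    X = sm.add_constant(np.column_stack(xs))
    return sm.OLS(y, X).fit()

# Step 1: predictor -> outcome (total effect c).
step1 = fit(readiness, involvement)
# Step 2: predictor -> mediator (path a).
step2 = fit(commitment, involvement)
# Step 3: predictor + mediator -> outcome (paths c' and b).
step3 = fit(readiness, involvement, commitment)

c, a = step1.params[1], step2.params[1]
c_prime, b = step3.params[1], step3.params[2]
# Partial mediation: a and b significant, |c'| < |c| but c' nonzero.
print(f"c={c:.2f}, a={a:.2f}, b={b:.2f}, c'={c_prime:.2f}")
```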
Procedia PDF Downloads 327
330 Optimization of Cobalt Oxide Conversion to Co-Based Metal-Organic Frameworks
Authors: Aleksander Ejsmont, Stefan Wuttke, Joanna Goscianska
Abstract:
Gaining control over particle shape, size and crystallinity is an ongoing challenge for many materials. Metal-organic frameworks (MOFs), in particular, have been widely studied recently. Besides their remarkable porosity and interesting topologies, morphology has proven to be a significant feature, as it can affect the further application of the material. Thus, seeking new approaches that enable MOF morphology modulation is important. MOFs are reticular structures whose building blocks are made up of organic linkers and metallic nodes. The most common strategy for providing the metal source is using salts, which usually exhibit high solubility and hinder morphology control. However, there has been growing interest in using metal oxides as structure-directing agents towards MOFs due to their very low solubility and shape preservation. Metal oxides can be treated as a metal reservoir during MOF synthesis. Up to now, reports on obtaining MOFs from metal oxides have mostly presented ZnO conversion to ZIF-8. However, there are other oxides, for instance Co₃O₄, which are often overlooked due to their structural stability and insolubility in aqueous solutions. Cobalt-based materials are famed for their catalytic activity; therefore, the development of their efficient synthesis is worth attention. In the presented work, an optimized Co₃O₄ transition to Co-MOF via a solvothermal approach is proposed. The starting point of the research was the synthesis of Co₃O₄ flower petals and needles under hydrothermal conditions using different cobalt salts (e.g., cobalt(II) chloride and cobalt(II) nitrate) in the presence of urea, with hexadecyltrimethylammonium bromide (CTAB) surfactant as a capping agent. After obtaining cobalt hydroxide, calcination was performed at various temperatures (300–500 °C). The cobalt oxides, as a source of cobalt cations, were then subjected to reaction with trimesic acid in a solvothermal environment at a temperature of 120 °C, leading to Co-MOF fabrication. The solution maintained in the system was a mixture of water, dimethylformamide, and ethanol, with the addition of strong acids (HF and HNO₃). To establish how solvents affect metal oxide conversion, several different solvent ratios were also applied. The obtained materials were characterized with analytical techniques, including X-ray powder diffraction, energy dispersive spectroscopy, low-temperature nitrogen adsorption/desorption, and scanning and transmission electron microscopy. It was confirmed that the synthetic routes led to the formation of Co₃O₄ and Co-based MOFs varying in particle shape and size. The diffractograms showed a crystalline phase for Co₃O₄ as well as for Co-MOF. The Co₃O₄ obtained from nitrates and with low-temperature calcination yielded smaller particles. The study indicated that cobalt oxide particles of different sizes influence the efficiency of the conversion and the morphology of the Co-MOF. The highest conversion was achieved using metal oxides with small crystallites. Keywords: Co-MOF, solvothermal synthesis, morphology control, core-shell
Procedia PDF Downloads 162
329 Understanding the Impact of Resilience Training on Cognitive Performance in Military Personnel
Authors: Haji Mohammad Zulfan Farhi Bin Haji Sulaini, Mohammad Azeezudde’en Bin Mohd Ismaon
Abstract:
The demands placed on military athletes extend beyond physical prowess to encompass cognitive resilience in high-stress environments. This study investigates the effects of resilience training on the cognitive performance of military athletes, shedding light on the potential benefits and implications for optimizing their overall readiness. In a rapidly evolving global landscape, armed forces worldwide are recognizing the importance of cognitive resilience alongside physical fitness. The study employs a mixed-methods approach, incorporating quantitative cognitive assessments and qualitative data from military athletes undergoing resilience training programs. Cognitive performance is evaluated through a battery of tests, including measures of memory, attention, decision-making, and reaction time. The participants, drawn from various branches of the military, are divided into experimental and control groups. The experimental group undergoes a comprehensive resilience training program, while the control group receives traditional physical training without a specific focus on resilience. The initial findings indicate a substantial improvement in cognitive performance among military athletes who have undergone resilience training. These improvements are particularly evident in domains such as attention and decision-making. The experimental group demonstrated enhanced situational awareness, quicker problem-solving abilities, and increased adaptability in high-stress scenarios. These results suggest that resilience training not only bolsters mental toughness but also positively impacts cognitive skills critical to military operations. In addition to quantitative assessments, qualitative data is collected through interviews and surveys to gain insights into the subjective experiences of military athletes. Preliminary analysis of these narratives reveals that participants in the resilience training program report higher levels of self-confidence, emotional regulation, and an improved ability to manage stress. These psychological attributes contribute to their enhanced cognitive performance and overall readiness. Moreover, this study explores the potential long-term benefits of resilience training. By tracking participants over an extended period, we aim to assess the durability of cognitive improvements and their effects on overall mission success. Early results suggest that resilience training may serve as a protective factor against the detrimental effects of prolonged exposure to stressors, potentially reducing the risk of burnout and psychological trauma among military athletes. This research has significant implications for military organizations seeking to optimize the performance and well-being of their personnel. The findings suggest that integrating resilience training into the training regimen of military athletes can lead to a more resilient and cognitively capable force. This, in turn, may enhance mission success, reduce the risk of injuries, and improve the overall effectiveness of military operations. In conclusion, this study provides compelling evidence that resilience training positively impacts the cognitive performance of military athletes. The preliminary results indicate improvements in attention, decision-making, and adaptability, as well as increased psychological resilience. As the study progresses and incorporates long-term follow-ups, it is expected to provide valuable insights into the enduring effects of resilience training on the cognitive readiness of military athletes, contributing to the ongoing efforts to optimize military personnel's physical and mental capabilities in the face of ever-evolving challenges. Keywords: military athletes, cognitive performance, resilience training, cognitive enhancement program
Procedia PDF Downloads 80
328 Machine Learning and Internet of Things for Smart-Hydrology of the Mantaro River Basin
Authors: Julio Jesus Salazar, Julio Jesus De Lama
Abstract:
The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). Of course, this range of studies in a given basin is very varied and complex and presents the difficulty of collecting the data in real time. In this complex space, the study of the variables can only be accomplished by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented the learning project for the sub-basin of the Shullcas river in the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins in the European Union. The machine learning application was programmed to choose the algorithms that lead to the best solution for determining the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at altitudes close to 5000 m a.s.l., giving the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km. To minimize the energy consumption of the devices and avoid collisions between packets, distances should stay between 5 and 10 km; in this way, the transmission power can be reduced and a higher bitrate can be used. If the communication elements of the network devices (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino boards is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption systems, such as ultra-low-power ARM Cortex microcontrollers (e.g., the Cortex-M family), together with high-efficiency direct current (DC) to direct current (DC) converters. The machine learning system has begun learning the Shullcas system to generate the best hydrology of the sub-basin. This will improve as the machine learning models are continuously updated with the data streamed into the big-data store. This will provide services to each of the applications of the complex system, returning the best estimates of the determined flows. Keywords: hydrology, internet of things, machine learning, river basin
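The algorithm-selection step described above (choosing the model that best fits the rainfall-runoff relationship) can be sketched as follows. This is a hedged illustration: the candidate models, synthetic data, and scoring choices are assumptions, not the project's actual pipeline:

```python
# Hedged sketch of the rainfall-runoff model selection step: compare
# candidate regressors on synthetic (rainfall, runoff) pairs standing in
# for the telemetered sub-basin data, and keep the best performer.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
rain = rng.gamma(shape=2.0, scale=5.0, size=(500, 1))  # mm/h, synthetic
# Nonlinear synthetic runoff response with noise.
runoff = 0.6 * rain[:, 0] ** 1.2 + rng.normal(scale=2.0, size=500)

candidates = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
scores = {
    name: cross_val_score(model, rain, runoff, cv=5, scoring="r2").mean()
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```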
Procedia PDF Downloads 160
327 Effect of Graded Level of Nano Selenium Supplementation on the Performance of Broiler Chicken
Authors: Raj Kishore Swain, Kamdev Sethy, Sumanta Kumar Mishra
Abstract:
Selenium is an essential trace element for chickens, with a variety of biological functions in growth, fertility, the immune system, hormone metabolism, and antioxidant defense systems. Selenium deficiency in chickens causes exudative diathesis, pancreatic dystrophy and nutritional muscular dystrophy of the gizzard, heart and skeletal muscle. Additionally, insufficient immunity, lowered production ability, decreased feathering of chickens and increased embryo mortality may occur due to selenium deficiency. Nano elemental selenium, which is bright red, highly stable, soluble and of nanometre size in the redox state of zero, has high bioavailability and low toxicity due to its greater surface area, high surface activity, high catalytic efficiency and strong adsorbing ability. To assess the effect of dietary nano-Se on performance and gene expression in Vencobb broiler birds in comparison to its inorganic form (sodium selenite), four hundred fifty day-old Vencobb broiler chicks were randomly distributed into 9 dietary treatment groups, each with two replicates of 25 chicks. The dietary treatments were: T1 (control group): basal diet; T2: basal diet with 0.3 ppm of inorganic Se; T3: basal diet with 0.01875 ppm of nano-Se; T4: basal diet with 0.0375 ppm of nano-Se; T5: basal diet with 0.075 ppm of nano-Se; T6: basal diet with 0.15 ppm of nano-Se; T7: basal diet with 0.3 ppm of nano-Se; T8: basal diet with 0.60 ppm of nano-Se; T9: basal diet with 1.20 ppm of nano-Se. Nano selenium was synthesized by mixing sodium selenite with reduced glutathione and bovine serum albumin. The experiment was carried out in two phases, a starter phase (0-3 wks) and a finisher phase (4-5 wks), in a deep litter system. Body weight at the 5th week was highest in T4. The best feed conversion ratio at the end of the 5th week was also observed in T4. Erythrocytic catalase, glutathione peroxidase and superoxide dismutase activities were significantly (P < 0.05) higher in all the nano selenium treated groups at the 5th week. The antibody titers (log2) against Ranikhet disease vaccine immunization in 5th-week broiler birds were significantly higher (P < 0.05) in treatments T4 to T7. The selenium levels in liver, breast, kidney, brain, and gizzard increased significantly (P < 0.05) with increasing dietary nano-Se, indicating higher bioavailability of nano-Se compared to inorganic Se. Real-time polymerase chain reaction analysis showed an increase in the expression of antioxidative genes in the T4 and T7 groups. Therefore, it is concluded that supplementation of nano-selenium at 0.0375 ppm over and above the basal level can improve body weight, antioxidant enzyme activity, Se bioavailability and the expression of antioxidative genes in broiler birds. Keywords: chicken, growth, immunity, nano selenium
Procedia PDF Downloads 177