Search results for: psychomotor performance
8386 Photoelectrochemical Water Splitting from Earth-Abundant CuO Thin Film Photocathode: Enhancing Performance and Photo-Stability through Deposition of Overlayers
Authors: Wilman Septina, Rajiv R. Prabhakar, Thomas Moehl, David Tilley
Abstract:
Cupric oxide (CuO) is a promising absorber material for the fabrication of scalable, low-cost solar energy conversion devices, due to the high abundance and low toxicity of copper. It is a p-type semiconductor with a band gap of around 1.5 eV, absorbing a significant portion of the solar spectrum. One of the main challenges in using CuO as a solar absorber in an aqueous system is its tendency towards photocorrosion, generating Cu2O and metallic Cu. Although there have been several reports of CuO as a photocathode for hydrogen production, it is unclear how much of the observed current actually corresponds to H2 evolution, as the possibility of photocorrosion is usually not addressed. In this research, we investigated the effect of the deposition of overlayers onto CuO thin films for the purpose of enhancing their photostability as well as their performance for water splitting applications. The CuO thin film was fabricated by galvanic electrodeposition of metallic copper onto gold-coated FTO substrates, followed by annealing in air at 600 °C. Photoelectrochemical measurement of the bare CuO film in 1 M phosphate buffer (pH 6.9) under simulated AM 1.5 sunlight showed a current density of ca. 1.5 mA cm-2 (at 0.4 V vs. RHE), but the film photocorroded to Cu metal upon prolonged illumination. This photocorrosion could be suppressed by deposition of a 50 nm-thick TiO2 layer by atomic layer deposition. In addition, we found that insertion of an n-type CdS layer, deposited by chemical bath deposition, between the CuO and TiO2 layers significantly enhanced the photocurrent compared to the stack without the CdS layer. A photocurrent of over 2 mA cm-2 (at 0 V vs. RHE) was observed using the photocathode stack FTO/Au/CuO/CdS/TiO2/Pt. Structural, electrochemical, and photostability characterizations of the photocathode, as well as results on various overlayers, will be presented.
Keywords: CuO, hydrogen, photoelectrochemical, photostability, water splitting
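As a quick sanity check on the optical claim above, the ~1.5 eV band gap can be converted to an absorption edge wavelength via E = hc/λ. The sketch below uses only the standard physical constant hc ≈ 1239.84 eV·nm; the numbers are illustrative, not taken from the measurements reported in the abstract.

```python
# Convert a semiconductor band gap (eV) to its optical absorption edge (nm),
# using E = h*c / lambda, with h*c expressed as ~1239.84 eV·nm.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV·nm

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Wavelength (nm) above which photons lack the energy to excite carriers."""
    return HC_EV_NM / band_gap_ev

# CuO band gap of ~1.5 eV (value quoted in the abstract)
edge = absorption_edge_nm(1.5)
print(f"Absorption edge: {edge:.0f} nm")  # ~827 nm, well into the near-infrared
```

An edge near 827 nm is consistent with the abstract's statement that CuO absorbs a significant portion of the solar spectrum.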
Procedia PDF Downloads 224
8385 A Conceptual Framework of the Individual and Organizational Antecedents to Knowledge Sharing
Authors: Muhammad Abdul Basit Memon
Abstract:
The importance of organizational knowledge sharing and knowledge management has been documented in numerous research studies in the available literature, since knowledge sharing has been recognized as a founding pillar of superior organizational performance and a source of competitive advantage. Building on this, most successful organizations perceive knowledge management and knowledge sharing as a concern of high strategic importance and invest heavily in the effective management and sharing of organizational knowledge. However, despite some very serious endeavors, many firms fail to capitalize on the benefits of knowledge sharing because they are unaware of the individual characteristics and the interpersonal, organizational, and contextual factors that influence knowledge sharing; in short, the antecedents to knowledge sharing. The extant literature offers a range of antecedents mentioned across a number of research articles and studies. Some previous studies examined antecedents to knowledge sharing in the context of inter-organizational knowledge transfer; others focused on inter- and intra-organizational knowledge sharing, and still others investigated organizational factors. Some of the organizational antecedents to KS relate to the characteristics and underlying aspects of the knowledge being shared, e.g., the specificity and complexity of the underlying knowledge to be transferred; others relate to specific organizational characteristics, e.g., the age and size of the organization, decentralization, and the absorptive capacity of the firm; and still others relate to the social relations and networks of organizations, such as social ties, trusting relationships, and value systems.
In the same way, some researchers have highlighted only one aspect, such as organizational commitment, transformational leadership, knowledge-centred culture, learning and performance orientation, or social network-based relationships in organizations. The bulk of the existing research articles on antecedents to knowledge sharing has mainly discussed organizational or environmental factors affecting knowledge sharing. However, the focus later shifted towards the analysis of individual or personal determinants of the individual’s engagement in knowledge sharing activities, such as personality traits, attitude, and self-efficacy. For example, employees’ goal orientations (i.e., learning orientation or performance orientation) are an important individual antecedent of knowledge sharing behaviour. Consistent with the existing literature, therefore, the antecedents to knowledge sharing can be classified as individual and organizational. This paper is an endeavor to discuss a conceptual framework of the individual and organizational antecedents to knowledge sharing in the light of the available literature and empirical evidence. This model can not only help in gaining familiarity with and comprehension of the subject matter by presenting a holistic view of the antecedents to knowledge sharing as discussed in the literature, but can also help business managers, and especially human resource managers, find insights into the salient features of organizational knowledge sharing. Moreover, this paper can provide a ground for research students and academicians to conduct both qualitative as well as quantitative research and to design an instrument for a survey on the topic of individual and organizational antecedents to knowledge sharing.
Keywords: antecedents to knowledge sharing, knowledge management, individual and organizational, organizational knowledge sharing
Procedia PDF Downloads 324
8384 Structure-Activity Relationship of Gold Catalysts on Alumina Supported Cu-Ce Oxides for CO and Volatile Organic Compound Oxidation
Authors: Tatyana T. Tabakova, Elitsa N. Kolentsova, Dimitar Y. Dimitrov, Krasimir I. Ivanov, Yordanka G. Karakirova, Petya Cv. Petrova, Georgi V. Avdeev
Abstract:
The catalytic oxidation of CO and volatile organic compounds (VOCs) is considered one of the most efficient ways to reduce harmful emissions from various chemical industries. The effectiveness of gold-based catalysts for many reactions of environmental significance has been proven over the past three decades. The aim of this work was to combine the favorable features of Au and Cu-Ce mixed oxides in the design of new catalytic materials of improved efficiency and economic viability for the removal of air pollutants in waste gases from formaldehyde production. Supported oxides of copper and cerium with Cu:Ce molar ratios of 2:1 and 1:5 were prepared by wet impregnation of γ-alumina. Gold (2 wt.%) catalysts were synthesized by a deposition-precipitation method. Catalyst characterization was carried out by texture measurements, powder X-ray diffraction, temperature-programmed reduction, and electron paramagnetic resonance spectroscopy. The catalytic activity in the oxidation of CO, CH3OH and (CH3)2O was measured using continuous flow equipment with a fixed bed reactor. Both Cu-Ce/alumina samples demonstrated similar catalytic behavior. The addition of gold caused a significant enhancement of CO and methanol oxidation activity (100% conversion of CO and CH3OH at about 60 and 140 °C, respectively). The composition of the Cu-Ce mixed oxides affected the performance of the gold-based samples considerably. The gold catalyst on Cu-Ce/γ-Al2O3 1:5 exhibited higher activity for CO and CH3OH oxidation in comparison with Au on Cu-Ce/γ-Al2O3 2:1. The better performance of Au/Cu-Ce 1:5 was related to the availability of highly dispersed gold particles and copper oxide clusters in close contact with ceria.
Keywords: CO and VOCs oxidation, copper oxide, ceria, gold catalysts
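The conversion figures quoted above follow from the standard definition of the degree of conversion from inlet and outlet reactant concentrations. A minimal sketch of that calculation (the function name and the example concentrations are illustrative, not from the study's data):

```python
def conversion_percent(c_in: float, c_out: float) -> float:
    """Degree of conversion (%) of a reactant across the fixed bed reactor."""
    if c_in <= 0:
        raise ValueError("inlet concentration must be positive")
    return 100.0 * (c_in - c_out) / c_in

# Example: a hypothetical light-off measurement where 1.0 vol.% CO at the inlet
# drops to 0.15 vol.% at the outlet
print(f"{conversion_percent(1.0, 0.15):.0f}% conversion")  # 85% conversion
```

Evaluating this at a series of reactor temperatures yields the light-off curves from which the "100% conversion at about 60 °C" figure above would be read.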
Procedia PDF Downloads 318
8383 Innovating Electronics Engineering for Smart Materials Marketing
Authors: Muhammad Awais Kiani
Abstract:
The field of electronics engineering plays a vital role in the marketing of smart materials. Smart materials are innovative, adaptive materials that can respond to external stimuli, such as temperature, light, or pressure, in order to enhance performance or functionality. As the demand for smart materials continues to grow, it is crucial to understand how electronics engineering can contribute to their marketing strategies. This abstract presents an overview of the role of electronics engineering in the marketing of smart materials. It explores the various ways in which electronics engineering enables the development and integration of smart features within materials, enhancing their marketability. Firstly, electronics engineering facilitates the design and development of sensing and actuating systems for smart materials. These systems enable the detection and response to external stimuli, providing valuable data and feedback to users. By integrating sensors and actuators into materials, their functionality and performance can be significantly enhanced, making them more appealing to potential customers. Secondly, electronics engineering enables the creation of smart materials with wireless communication capabilities. By incorporating wireless technologies such as Bluetooth or Wi-Fi, smart materials can seamlessly interact with other devices, providing real-time data and enabling remote control and monitoring. This connectivity enhances the marketability of smart materials by offering convenience, efficiency, and improved user experience. Furthermore, electronics engineering plays a crucial role in power management for smart materials. Implementing energy-efficient systems and power harvesting techniques ensures that smart materials can operate autonomously for extended periods. This aspect not only increases their market appeal but also reduces the need for constant maintenance or battery replacements, thus enhancing customer satisfaction. 
Lastly, electronics engineering contributes to the marketing of smart materials through innovative user interfaces and intuitive control mechanisms. By designing user-friendly interfaces and integrating advanced control systems, smart materials become more accessible to a broader range of users. Clear and intuitive controls enhance the user experience and encourage wider adoption of smart materials in various industries. In conclusion, electronics engineering significantly influences the marketing of smart materials by enabling the design of sensing and actuating systems, wireless connectivity, efficient power management, and user-friendly interfaces. The integration of electronics engineering principles enhances the functionality, performance, and marketability of smart materials, making them more adaptable to the growing demand for innovative and connected materials in diverse industries.
Keywords: electronics engineering, smart materials, marketing, power management
Procedia PDF Downloads 59
8382 Instructional Game in Teaching Algebra for High School Students: Basis for Instructional Intervention
Authors: Jhemson C. Elis, Alvin S. Magadia
Abstract:
Our world is full of numbers, shapes, and figures that illustrate the wholeness of things. Indeed, this statement signifies that mathematics is everywhere. Mathematics in its broadest sense helps people in their everyday lives, which is why it is a must for students to take it as a subject in school. The study aims to determine the profile of the respondents in terms of gender and age, the performance of the control and experimental groups in the pretest and posttest, the impact of the instructional game used as an instructional intervention in teaching algebra for high school students, the significant difference between the levels of performance of the two groups of respondents in their pre-test and post-test results, and the instructional intervention that can be proposed. The descriptive method was utilized in this study, as it corresponds to the main objective of this research, which is to determine the effectiveness of the instructional game used as an instructional intervention in teaching algebra for high school students. There were 30 student respondents, divided into two equal samples of 15 each, while the teacher respondents comprised a greater number of females, totaling 7 (70 percent), against 3 males (30 percent). The study recommended that mathematics teachers should conceptualize instructional games so that students learn mathematics with fun and enjoyment. The mathematics education program supervisor should provide training for teachers on how to conceptualize mathematics interventions for students' learning. Meaningful activities must be provided to sustain students' interest in learning. Students must be given time to have fun in the classroom through playing while learning, since they consider mathematics difficult.
Future researchers must continue conceptualizing mathematics interventions that meet the needs of students, and teachers should incorporate more educational games so that discussions will be successful and joyful.
Keywords: instructional game in algebra, mathematical intervention, joyful, successful
Procedia PDF Downloads 597
8381 Lessons of Passive Environmental Design in the Sarabhai and Shodan Houses by Le Corbusier
Authors: Juan Sebastián Rivera Soriano, Rosa Urbano Gutiérrez
Abstract:
The Shodan House and the Sarabhai House (Ahmedabad, India, 1954 and 1955, respectively) are considered some of the most important works of Le Corbusier produced in the last stage of his career. There are some academic publications that study the compositional and formal aspects of their architectural design, but there is no in-depth investigation into how the climatic conditions of this region were a determining factor in the design decisions implemented in these projects. This paper argues that Le Corbusier developed a specific architectural design strategy for these buildings based on scientific research on climate in the Indian context. This new language was informed by a pioneering study and interpretation of climatic data as a design methodology that would even involve the development of new design tools. This study investigated whether their use of climatic data meets the values and levels of accuracy obtained with contemporary instruments and tools, such as EnergyPlus weather data files and Climate Consultant. It also intended to find out whether the intentions and decisions of Le Corbusier's office were indeed appropriate and efficient for those climate conditions, by assessing these projects using BIM models and energy performance simulations in DesignBuilder. Accurate models were built using original historical data obtained through archival research. The outcome is a new understanding of the environment of these houses through the combination of modern building science and architectural history. The results confirm that these houses achieved a model of low energy consumption. This paper contributes new evidence not only on exemplary modern architecture concerned with environmental performance but also on how it developed progressive thinking in this direction.
Keywords: bioclimatic architecture, Le Corbusier, Shodan, Sarabhai Houses
Procedia PDF Downloads 65
8380 A Review on the Hydrologic and Hydraulic Performances in Low Impact Development-Best Management Practices Treatment Train
Authors: Fatin Khalida Abdul Khadir, Husna Takaijudin
Abstract:
The bioretention system is one of the alternatives to conventional stormwater management within the low impact development (LID) strategy for best management practices (BMPs). Incorporating both filtration and infiltration, initial research on bioretention systems has shown that this practice extensively decreases runoff volumes and peak flows. The LID-BMP treatment train is one of the latest LID-BMPs for stormwater treatment in urbanized watersheds. The treatment train is developed to overcome the drawbacks that arise from conventional LID-BMPs and aims to enhance the performance of existing practices. In addition, it is used to improve both water quality and water quantity controls, as well as to maintain the natural hydrology of an area despite current massive developments. The objective of this paper is to review the effectiveness of conventional LID-BMPs on hydrologic and hydraulic performance through column studies in different configurations. Previous studies on applications of the LID-BMP treatment train, developed to overcome the drawbacks of conventional LID-BMPs, are reviewed and used as guidelines for implementing this system at Universiti Teknologi Petronas (UTP) and elsewhere. Analyses of hydrologic and hydraulic performance using the artificial neural network (ANN) model are also reviewed for use in this study. In this study, the role of the LID-BMP treatment train is tested by arranging bioretention cells in series, to be implemented for controlling floods that occur currently and that may occur once construction of the new buildings at UTP is completed. A summary of the research findings on the performance of the system is provided, including proposed modifications to the designs.
Keywords: bioretention system, LID-BMP treatment train, hydrological and hydraulic performance, ANN analysis
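The ANN analysis referred to above is not specified in detail, so the sketch below is a hedged illustration only: a minimal one-hidden-layer network trained by gradient descent on synthetic rainfall/inflow data to predict a runoff-reduction response. The feature names, data, and network size are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: rainfall intensity and inflow fraction -> runoff reduction
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.normal(size=200)).reshape(-1, 1)

# One hidden layer with tanh activation, trained by plain gradient descent
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)   # hidden activations
    return h, h @ W2 + b2      # network prediction

losses = []
for _ in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared error through both layers
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)        # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In practice the review's ANN models would be trained on measured inflow/outflow hydrographs from the column studies rather than synthetic data.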
Procedia PDF Downloads 118
8379 Graphene-Reinforced Metal-Organic Framework Derived Cobalt Sulfide/Carbon Nanocomposites as Efficient Multifunctional Electrocatalysts
Authors: Yongde Xia, Laicong Deng, Zhuxian Yang
Abstract:
Developing cost-effective electrocatalysts for the oxygen reduction reaction (ORR), oxygen evolution reaction (OER) and hydrogen evolution reaction (HER) is vital for energy conversion and storage applications. Herein, we report a simple method for the synthesis of graphene-reinforced cobalt sulfide/carbon nanocomposites and an evaluation of their performance in these typical electrocatalytic reactions. Nanocomposites of cobalt sulfide embedded in N, S co-doped porous carbon and graphene (CoS@C/Graphene) were generated via simultaneous sulfurization and carbonization of one-pot synthesized graphite oxide-ZIF-67 precursors. The obtained CoS@C/Graphene nanocomposite was characterized by X-ray diffraction, Raman spectroscopy, thermogravimetric analysis-mass spectrometry, scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy and gas sorption. It was found that cobalt sulfide nanoparticles were homogeneously dispersed in the in-situ formed N, S co-doped porous carbon/graphene matrix. The CoS@C/10Graphene composite not only shows excellent electrocatalytic activity toward the ORR, with a high onset potential of 0.89 V, a four-electron pathway and superior durability (maintaining 98% of the current after continuous operation for around 5 hours), but also exhibits good performance for the OER and HER, owing to the improved electrical conductivity, increased number of catalytic active sites and connectivity between the electrocatalytically active cobalt sulfide and the carbon matrix. This work offers a new approach for the development of novel multifunctional nanocomposites for the next generation of energy conversion and storage applications.
Keywords: MOF derivative, graphene, electrocatalyst, oxygen reduction reaction, oxygen evolution reaction, hydrogen evolution reaction
Procedia PDF Downloads 50
8378 Development of Cost Effective Ultra High Performance Concrete by Using Locally Available Materials
Authors: Mohamed Sifan, Brabha Nagaratnam, Julian Thamboo, Keerthan Poologanathan
Abstract:
Ultra high performance concrete (UHPC) is a type of cementitious material known for its exceptional strength, ductility, and durability. However, its production is often associated with high costs due to the significant amount of cementitious materials required and the use of fine powders to achieve the desired strength. The aim of this research is to explore the feasibility of developing cost-effective UHPC mixes using locally available materials. Specifically, the study aims to investigate the use of coarse limestone sand along with other sand types, namely basalt sand, dolomite sand, and river sand, for developing UHPC mixes and evaluating their performance. The study utilises the particle packing model to develop various UHPC mixes. The particle packing model involves optimising the combination of coarse limestone sand, basalt sand, dolomite sand, and river sand to achieve the desired properties of UHPC. The developed UHPC mixes are then evaluated based on their workability (measured through slump flow and mini slump values), compressive strength (at 7, 28, and 90 days), splitting tensile strength, and microstructural characteristics analysed through scanning electron microscope (SEM) analysis. The results of this study demonstrate that cost-effective UHPC mixes can be developed using locally available materials without the need for silica fume or fly ash. The UHPC mixes achieved impressive compressive strengths of up to 149 MPa at 28 days with a cement content of approximately 750 kg/m³. The mixes also exhibited varying levels of workability, with slump flow values ranging from 550 to 850 mm. Additionally, the inclusion of coarse limestone sand in the mixes effectively reduced the demand for superplasticizer and served as a filler material. By exploring the use of coarse limestone sand and other sand types, this study provides valuable insights into optimising the particle packing model for UHPC production.
The findings highlight the potential to reduce the costs associated with UHPC production without compromising strength and durability. The study collected data on the workability, compressive strength, splitting tensile strength, and microstructural characteristics of the developed UHPC mixes. Workability was measured using slump flow and mini slump tests, while compressive strength and splitting tensile strength were assessed at different curing periods. Microstructural characteristics were analysed through SEM and energy dispersive X-ray spectroscopy (EDS) analysis. The collected data were then analysed and interpreted to evaluate the performance and properties of the UHPC mixes. The research successfully demonstrates the feasibility of developing cost-effective UHPC mixes using locally available materials. The inclusion of coarse limestone sand, in combination with other sand types, shows promising results in achieving high compressive strengths and satisfactory workability. The findings suggest that use of the particle packing model can optimise the combination of materials and reduce reliance on expensive additives such as silica fume and fly ash. This research provides valuable insights for researchers and construction practitioners aiming to develop cost-effective UHPC mixes using readily available materials and an optimised particle packing approach.
Keywords: cost-effective, limestone powder, particle packing model, ultra high performance concrete
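The abstract does not name the specific particle packing model used; a common choice in UHPC mix design is the modified Andreasen and Andersen target curve, sketched below as an assumption. The grading limits and distribution modulus q are illustrative values, not the study's actual parameters.

```python
def target_passing(d: float, d_min: float, d_max: float, q: float = 0.23) -> float:
    """Modified Andreasen & Andersen target curve: cumulative fraction of
    particles finer than size d for a densely packed granular mix."""
    return (d ** q - d_min ** q) / (d_max ** q - d_min ** q)

# Hypothetical grading limits: 0.1 micron (cement fines) to 4 mm (coarse limestone sand)
d_min, d_max = 0.0001, 4.0  # mm
for d in (0.001, 0.063, 0.5, 2.0, 4.0):
    print(f"{d:>6} mm -> target passing {100 * target_passing(d, d_min, d_max):5.1f}%")
```

Mix proportions are then chosen so that the combined grading of cement, fillers, and the various sands tracks this target curve as closely as possible.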
Procedia PDF Downloads 110
8377 The Effects of Dynamic Training Shoes Exercises on Isokinetic Strength Performance
Authors: Bergun Meric Bingul, Yezdan Cinel, Murat Son, Cigdem Bulgan, Mensure Aydin
Abstract:
The aim of this study was to determine the effects of training with specially designed roller-shoes on knee and hip isokinetic performance. Thirty soccer players participated as subjects and were randomly divided into 3 groups: training with the dynamic training shoes, training without the dynamic training shoes, and a control group. Subjects performed speed-strength training for 8 weeks (3 days a week, 1 hour a day). Six exercises focusing on the knee flexors and extensors and on the hip adductor and abductor muscles were chosen and performed in 3 sets of 30 seconds each. The control group did not participate in the training program. Before and after the training program, the peak torques of the knee flexor and extensor muscles and of the hip abductor and adductor muscles were measured with a Biodex III isokinetic dynamometer. Isokinetic strength data were analyzed using the SPSS program. A repeated measures analysis of variance (ANOVA) was used to determine differences among the peak torque values for the three groups. The results indicated that peak torque values were higher in the group using the dynamic training shoes. Hip adductor and abductor peak torques in this group were also better than in the other groups. In conclusion, ground friction forces play an important role in increasing strength. With these roller shoes, friction forces were reduced, allowing players to move more easily and with a greater range of motion. Exercises could therefore be performed faster, and strength movements remained active through all angles, resulting in a better use of force.
Keywords: isokinetic, soccer, dynamic training shoes, training
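The repeated-measures ANOVA reported above is more involved, but the core F-statistic logic can be illustrated with a plain between-groups one-way ANOVA computed from scratch. This is a simplified stand-in, not the study's exact analysis, and the torque values below are invented for demonstration.

```python
def one_way_anova_f(groups):
    """F-statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical peak torque values (Nm) for the three training groups
with_shoes = [148, 152, 155, 150, 149]
without_shoes = [140, 143, 141, 139, 144]
control = [135, 137, 133, 136, 138]
print(f"F = {one_way_anova_f([with_shoes, without_shoes, control]):.2f}")
```

A large F relative to the critical value for (2, 12) degrees of freedom would indicate a significant group difference, which in the full analysis is assessed across the pre/post repeated measurements.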
Procedia PDF Downloads 269
8376 Investigating Early Markers of Alzheimer’s Disease Using a Combination of Cognitive Tests and MRI to Probe Changes in Hippocampal Anatomy and Functionality
Authors: Netasha Shaikh, Bryony Wood, Demitra Tsivos, Michael Knight, Risto Kauppinen, Elizabeth Coulthard
Abstract:
Background: Effective treatment of dementia will require early diagnosis, before significant brain damage has accumulated. Memory loss is an early symptom of Alzheimer’s disease (AD). The hippocampus, a brain area critical for memory, degenerates early in the course of AD. The hippocampus comprises several subfields. In contrast to healthy aging, where CA3 and the dentate gyrus are the hippocampal subfields with the most prominent atrophy, in AD the CA1 and subiculum are thought to be affected early. Conventional clinical structural neuroimaging is not sufficiently sensitive to identify preferential atrophy in individual subfields. Here, we will explore the sensitivity of new magnetic resonance imaging (MRI) sequences designed to interrogate medial temporal regions as an early marker of Alzheimer’s. As a combination of tests is likely to predict early AD better than any single test, we look at the potential efficacy of such imaging alone and in combination with standard and novel cognitive tasks of hippocampal-dependent memory. Methods: 20 patients with mild cognitive impairment (MCI), 20 with mild-moderate AD and 20 age-matched healthy elderly controls (HC) are being recruited to undergo 3T MRI (with sequences designed to allow volumetric analysis of hippocampal subfields) and a battery of cognitive tasks (including Paired Associative Learning from CANTAB, the Hopkins Verbal Learning Test and a novel hippocampal-dependent abstract word memory task). AD participants and healthy controls are being tested just once, whereas patients with MCI will be tested twice, a year apart. We will compare subfield size between groups and correlate subfield size with cognitive performance on our tasks. In the MCI group, we will explore the relationship between subfield volume, cognitive test performance and deterioration in clinical condition over a year.
Results: Preliminary data (currently on 16 participants: 2 AD; 4 MCI; 9 HC) have revealed subfield size differences between subject groups. Patients with AD perform with less accuracy on tasks of hippocampal-dependent memory, and MCI patients' performance and reaction times also differ from those of healthy controls. With further testing, we hope to delineate how subfield-specific atrophy corresponds with changes in cognitive function, and to characterise how this progresses over the time course of the disease. Conclusion: Novel sequences on an MRI scanner, such as those en route to clinical use, can be used to delineate hippocampal subfields in patients with and without dementia. Preliminary data suggest that such subfield analysis, perhaps in combination with cognitive tasks, may be an early marker of AD.
Keywords: Alzheimer's disease, dementia, memory, cognition, hippocampus
Procedia PDF Downloads 573
8375 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to study the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of the country or the region, it is pertinent to study how the disruptions cascade through every single economic entity affecting its decisions and interactions, and eventually affect the economic macro parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using message passing interface (MPI). A balanced distribution of computational load among MPI-processes (i.e. CPU cores) of computer clusters while taking all the interactions among agents into account is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks, etc.) whereas others are dense with random links (e.g. consumption markets, etc.). The agents are partitioned into mutually-exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI-process, are adopted. 
Efficient communication among MPI-processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Eurozone (i.e. 322 million agents).
Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
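The balanced distribution of computational load described above hinges on partitioning agents across MPI processes. As a language-agnostic toy of that idea, the sketch below assigns heterogeneous agent workloads to a fixed number of processes with a greedy longest-processing-time heuristic; the real implementation partitions along an employer-employee interaction graph, which this sketch does not attempt, and all names and load figures are invented.

```python
import heapq

def partition_agents(agent_loads: dict, n_procs: int) -> dict:
    """Greedily assign each agent (heaviest first) to the currently least-loaded
    process -- the classic longest-processing-time (LPT) heuristic."""
    # Heap entries: (total load, process id, assigned agents); unique ids break ties
    heap = [(0.0, p, []) for p in range(n_procs)]
    heapq.heapify(heap)
    for agent, load in sorted(agent_loads.items(), key=lambda kv: -kv[1]):
        total, p, members = heapq.heappop(heap)
        members.append(agent)
        heapq.heappush(heap, (total + load, p, members))
    return {p: members for _, p, members in heap}

# Hypothetical workloads: firms cost more to simulate than households
loads = {f"household_{i}": 1.0 for i in range(12)}
loads.update({f"firm_{i}": 4.0 for i in range(3)})
assignment = partition_agents(loads, n_procs=4)
for p, members in sorted(assignment.items()):
    print(p, sum(loads[m] for m in members))
```

In the full DMP implementation, this kind of load balance must additionally respect graph locality so that most employer-employee interactions stay within one process.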
Procedia PDF Downloads 128
8374 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil
Authors: Ana Julia C. Kfouri
Abstract:
A prerequisite of any building design is to provide security to its users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which stand between the person and the environment and must provide improved thermal comfort conditions and low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for projects, and they should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. Based on that, the study of thermal comfort in educational buildings is of particular relevance, since the thermal characteristics of these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. In this line of thought, to evaluate university classroom conditions, a detailed case study on the thermal comfort situation at the Federal University of Parana (UFPR) was carried out. The main goal of the study is to perform a thermal analysis in three classrooms at UFPR in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied to evaluate the users' perception of the local thermal conditions. Regarding the physical variables, on-site measurements of air temperature and air humidity were carried out, both inside and outside the building, along with meteorological variables such as wind speed and direction, solar radiation and rainfall, collected from a weather station.
Then, a computer simulation was conducted in the EnergyPlus software to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results in order to draw conclusions about the local thermal conditions. The methodological approach included in the study allowed a distinct perspective on an educational building, providing a better understanding of the classroom thermal performance, as well as the reasons for such behavior. Finally, the study prompts reflection on the importance of thermal comfort for educational buildings and proposes thermal alternatives for future projects, as well as a discussion about the significant impact of using computer simulation on engineering solutions, in order to improve the thermal performance of UFPR’s buildings.
Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort
Procedia PDF Downloads 386
8373 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales
Authors: Philipp Sommer, Amgad Agoub
Abstract:
The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. 
These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating that predictions on the 1-100 (G-A) efficiency class scale may deviate by 4.12 points on average. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning
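The two validation metrics quoted above, the coefficient of determination (R²) and the mean absolute error (MAE) on the 1-100 (G-A) efficiency-class scale, follow the standard definitions; a minimal sketch with invented values, not data from the study:

```python
# Minimal sketch of the two validation metrics used above: R^2 and MAE.
# The sample values are illustrative only, not data from the study.

def r_squared(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
    return 1.0 - ss_res / ss_tot

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Invented efficiency-class scores on the 1-100 (G-A) scale.
actual    = [55.0, 62.0, 71.0, 48.0, 90.0]
predicted = [53.0, 66.0, 69.0, 51.0, 86.0]

print(round(r_squared(actual, predicted), 3))  # → 0.954
print(round(mae(actual, predicted), 3))        # → 3.0
```

An MAE of 4.12, as reported, therefore means the predicted efficiency-class score is off by about four points on average.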
Procedia PDF Downloads 57
8372 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees
Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel
Abstract:
Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata is generated from each related image to support its identification. This study presents the use of decision trees for the optimization of information search processes for diagnostic images hosted on a cloud server. To analyze the performance of the server, the following quality of service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency and throughput in five test scenarios, for a total of 26 experiments during the loading and downloading of DICOM images, hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times of diagnostic images on the server. The results show that by using the metadata in decision trees, the search times are substantially improved, the computational resources are optimized and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% in relation to sequential search, since false positives in the management and acquisition of the information are avoided when downloading a diagnostic image. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
Keywords: cloud storage, decision trees, diagnostic image, search, telemedicine
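The core idea, branching on image metadata instead of scanning every stored record, can be illustrated with a simple sketch. The field names and records below are hypothetical, not the clinic's actual DICOM schema, and the dictionary index stands in for the paper's decision-tree structure:

```python
# Illustrative contrast between a sequential scan and a metadata-keyed lookup,
# in the spirit of the decision-tree search described above. Field names and
# records are hypothetical stand-ins for DICOM metadata.

def sequential_search(images, modality, body_part):
    """Baseline: scan every record until a match is found."""
    for img in images:
        if img["modality"] == modality and img["body_part"] == body_part:
            return img
    return None

def build_index(images):
    """Pre-split the records on the metadata attributes used for routing,
    analogous to branching on those attributes in a decision tree."""
    index = {}
    for img in images:
        index.setdefault((img["modality"], img["body_part"]), []).append(img)
    return index

images = [
    {"uid": "1.2.1", "modality": "CT", "body_part": "HEAD"},
    {"uid": "1.2.2", "modality": "MR", "body_part": "KNEE"},
    {"uid": "1.2.3", "modality": "CT", "body_part": "CHEST"},
]

index = build_index(images)
hit = index[("CT", "CHEST")][0]          # direct lookup, no scan
assert hit == sequential_search(images, "CT", "CHEST")
print(hit["uid"])  # → 1.2.3
```

The indexed lookup answers in one step regardless of store size, which is consistent with the reported reduction in search times and avoided false positives.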
Procedia PDF Downloads 204
8371 Studying the Bond Strength of Geo-Polymer Concrete
Authors: Rama Seshu Doguparti
Abstract:
This paper presents an experimental investigation of the bond behavior of geo-polymer concrete. The bond behavior of geo-polymer concrete cubes of grade M35 reinforced with 16 mm TMT rods is analyzed. The results indicate that the bond performance of reinforced geo-polymer concrete is good, supporting its application in construction.
Keywords: geo-polymer, concrete, bond strength, behaviour
Procedia PDF Downloads 508
8370 Eight Weeks of Suspension Systems Training on Fat Mass, Jump and Physical Fitness Index in Female
Authors: Che Hsiu Chen, Su Yun Chen, Hon Wen Cheng
Abstract:
Greater core stability may benefit sports performance by providing a foundation for greater force production in the upper and lower extremities. Core stability exercises on an instability device (such as the TRX suspension system) were found to induce higher core muscle activity than the same exercises performed on a stable surface. However, the effects of high intensity interval TRX suspension training on sport performance remain unclear. The purpose of this study was to examine whether high intensity TRX suspension training could improve sport performance. Twenty-four healthy university female students (age 19.0 years, height 157.9 cm, body mass 51.3 kg, fat mass 25.2%) voluntarily participated in this study. After a familiarization session, each participant underwent five suspension exercises (e.g., hip abduction in plank alternative, hamstring curl, 45-degree row, lunge and oblique crunch). Each type of exercise was performed for 30 seconds, followed by a 30-second break, twice per week for eight weeks, with each exercise bout lengthened by 10 seconds every week. The results showed that after the 8-week high intensity TRX suspension training, fat mass decreased significantly (by about 12.92%), while the sit and reach test (9%), 1 minute sit-up test (17.5%), standing broad jump (4.8%) and physical fitness index (10.3%) improved significantly. Hence, eight weeks of high intensity interval TRX suspension training can improve hamstring flexibility, trunk endurance, jump ability and aerobic fitness, and substantially decrease fat mass percentage.
Keywords: core endurance, jump, flexibility, cardiovascular fitness
Procedia PDF Downloads 408
8369 Seismic Assessment of Passive Control Steel Structure with Modified Parameter of Oil Damper
Authors: Ahmad Naqi
Abstract:
Today, passively controlled buildings are becoming increasingly popular due to their excellent lateral load resistance. Typically, these buildings are enhanced with damping devices, for which market demand is high. Some manufacturers falsify the damping device parameters during production to meet that demand. Therefore, this paper evaluates the seismic performance of buildings equipped with damping devices whose parameters have been intentionally modified to simulate falsified devices. For this purpose, three benchmark buildings of 4, 10, and 20 stories were selected from the JSSI (Japan Society of Seismic Isolation) manual. The buildings are special moment resisting steel frames with oil dampers in the longitudinal direction only. For each benchmark building, two types of structural elements are designed to resist the lateral load with and without damping devices (hereafter, Trimmed and Conventional Buildings). The target buildings were modeled using STERA-3D, finite element based software developed for research purposes. The software can produce either a three-dimensional model (3DM) or a lumped mass model (LMM). Firstly, the seismic performance of the 3DM and LMM models was evaluated and found to coincide closely for the target buildings. The simplified LMM was then used in this study to produce 66 cases for both buildings. The device parameters were modified by ±40% and ±20% to predict many possible conditions of falsification. It is verified that buildings designed to sustain the lateral load with the support of damping devices (Trimmed Buildings) are much more threatened by device falsification than those merely strengthened by damping devices (Conventional Buildings).
Keywords: passive control system, oil damper, seismic assessment, lumped mass model
Procedia PDF Downloads 114
8368 Enhancing Students’ Performance in Basic Science and Technology in Nigeria Using Moodle LMS
Authors: Olugbade Damola, Adekomi Adebimbo, Sofowora Olaniyi Alaba
Abstract:
One of the major problems facing education in Nigeria is the provision of quality Science and Technology education. Inadequate teaching facilities, non-usage of innovative teaching strategies, ineffective classroom management, lack of student motivation and poor integration of ICT have resulted in an increase in the percentage of students who failed Basic Science and Technology in the Junior Secondary Certification Examination of the National Examination Council in Nigeria. To address these challenges, the Federal Government came up with a road map on education, with a view to enhancing quality education through the integration of modern technology into teaching and learning, enhancing quality assurance through proper monitoring and introducing innovative methods of teaching. This led the researcher to investigate how MOODLE LMS could be used to enhance students’ learning outcomes in BST. A sample of 120 students was purposively selected from four secondary schools in Ogbomoso. The experimental group was taught using MOODLE LMS, while the control group was taught using the conventional method. Data obtained were analyzed using mean, standard deviation and t-test. The result showed that MOODLE LMS was an effective learning platform in teaching BST in junior secondary schools (t=4.953, P<0.05). Students’ attitudes towards BST were also enhanced through MOODLE LMS (t=15.632, P<0.05). The use of MOODLE LMS significantly enhanced students’ retention (t=6.640, P<0.05). In conclusion, the Federal Government’s efforts at enhancing quality assurance through the integration of modern technology and e-learning in secondary schools proved to have yielded good results, as students found MOODLE LMS to be motivating and interactive. Attendance also improved.
Keywords: basic science and technology, MOODLE LMS, performance, quality assurance
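The t statistics reported above come from a standard two-sample comparison of the experimental and control groups. A minimal sketch of the t statistic (Welch's form), with invented post-test scores rather than the study's data, and without the degrees-of-freedom and p-value lookup that accompanies it:

```python
import math

def welch_t(sample_a, sample_b):
    """Two-sample t statistic (Welch's form), of the kind used to compare
    the MOODLE LMS group against the conventionally taught group."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
    na, nb = len(sample_a), len(sample_b)
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(
        var(sample_a) / na + var(sample_b) / nb
    )

# Invented post-test scores, not the study's data.
moodle_group       = [72, 68, 75, 80, 77, 70]
conventional_group = [61, 64, 58, 66, 60, 63]

t = welch_t(moodle_group, conventional_group)
print(round(t, 2))  # → 5.34 (positive t: MOODLE group scored higher on average)
```

A large positive t with P<0.05, as in the abstract's (t=4.953, P<0.05), indicates the group means differ by more than sampling noise would explain.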
Procedia PDF Downloads 303
8367 Design and Construction Validation of Pile Performance through High Strain Pile Dynamic Tests for both Contiguous Flight Auger and Drilled Displacement Piles
Authors: S. Pirrello
Abstract:
Sydney’s booming real estate market has pushed property developers to invest in historically “no-go” areas, which were previously too expensive to develop. These areas are usually near rivers, where the sites are underlain by deep alluvial and estuarine sediments. In these ground conditions, conventional bored pile techniques are often not competitive. Contiguous Flight Auger (CFA) and Drilled Displacement (DD) pile techniques are, on the other hand, suitable for these ground conditions. This paper deals with the design and construction challenges encountered with these piling techniques for a series of high-rise towers in Sydney’s West. The advantages of DD over CFA piles, such as reduced overall spoil with substantial cost savings and achievable rock sockets in medium strength bedrock, are discussed. Design performances were assessed with PIGLET. Pile performances are validated in two stages: during construction, with the interpretation of real-time data from the piling rigs’ on-board computers, and after construction, with analyses of results from high strain pile dynamic testing (PDA). Results are then presented and discussed. High strain testing data are presented as Case Pile Wave Analysis Program (CAPWAP) analyses.
Keywords: contiguous flight auger (CFA), DEFPIG, case pile wave analysis program (CAPWAP), drilled displacement piles (DD), pile dynamic testing (PDA), PIGLET, PLAXIS, repute, pile performance
Procedia PDF Downloads 282
8366 Integrating Flipped Instruction to Enhance Second Language Acquisition
Authors: Borja Ruiz de Arbulo Alonso
Abstract:
This paper analyzes the impact of flipped instruction on adult learners of Spanish as a second language in a face-to-face course at Boston University. Given the limited amount of contact hours devoted to studying world languages in the American higher education system, implementing strategies to free up classroom time for communicative language practice is key to ensuring student success in the learning process. In an effort to improve the way adult learners acquire a second language, this paper examines the role that regular pre-class, web-based exposure to Spanish grammar plays in student performance at the end of the academic term. It outlines different types of web-based pre-class activities and compares this approach to more traditional classroom practice. To do so, this study works for three months with two similar groups of adult learners in an intermediate-level Spanish class. Both groups use the same course program and have the same previous language experience, but one receives an additional set of instructor-made online materials containing a variety of grammar explanations and online activities that need to be reviewed before attending class. Since the online activities cover material and concepts that have not yet been studied in class, students' oral and written production in both groups is measured by means of a writing activity and an audio recording at the end of the three-month period. These assessments will ascertain the effects of exposing the experimental group to the grammar of the target language prior to each lecture throughout the term, demonstrate where flipped instruction helps adult learners of Spanish achieve higher performance, and also identify potential problems.
Keywords: educational technology, flipped classroom, second language acquisition, student success
Procedia PDF Downloads 125
8365 Experiences of Trainee Teachers: A Survey on Expectations and Realities in Special Secondary Schools in Kenya
Authors: Mary Cheptanui Sambu
Abstract:
Teaching practice is an integral component of the education of students who are training to be teachers, as it provides them with an opportunity to gain experience in an actual teaching and learning environment. This study explored the experiences of trainee teachers from a local university in Kenya undergoing a three-month teaching practice in special secondary schools in the country. The main aim of the study was to understand the trainees’ experiences, their expectations, and the realities encountered during the teaching practice period. The study focused on special secondary schools for learners with hearing impairment. A descriptive survey design was employed, and a sample of forty-four respondents from special secondary schools for learners with hearing impairment was purposively selected. A questionnaire was administered to the respondents, and the data obtained were analyzed using the Statistical Package for the Social Sciences (SPSS). Preliminary analysis shows that challenges facing special secondary schools include inadequate teaching and learning facilities and resources, low academic performance among learners with hearing impairment, an overloaded curriculum and an inadequate number of teachers for the learners. The study findings suggest that the Kenyan government should invest more in the education of special needs children, particularly focusing on increasing the number of trained teachers. In addition, the education curriculum offered in special secondary schools should be tailored towards the needs and interests of learners. These research findings will be useful to policymakers and curriculum developers, and will provide information that can be used to enhance the education of learners with hearing impairment; this will lead to improved academic performance, consequently resulting in better transitions and the realization of Vision 2030.
Keywords: hearing impairment, special secondary schools, trainee, teaching practice
Procedia PDF Downloads 163
8364 Promotional Effects of Zn in Cu-Zn/Core-Shell Al-MCM-41 for Selective Catalytic Reduction of NO with NH3: Acidic Properties, NOx Adsorption Properties, and Nature of Copper
Authors: Thidarat Imyen, Paisan Kongkachuichay
Abstract:
Cu-Zn/core-shell Al-MCM-41 catalyst with various copper species, prepared by a combination of three methods—substitution, ion-exchange, and impregnation, was studied for the selective catalytic reduction (SCR) of NO with NH3 at 300 °C for 150 min. In order to investigate the effects of Zn introduction on the nature of the catalyst, Cu/core-shell Al-MCM-41 and Zn/core-shell Al-MCM-41 catalysts were also studied. The roles of Zn promoter in the acidity and the NOx adsorption properties of the catalysts were investigated by in situ Fourier transform infrared spectroscopy (FTIR) of NH3 and NOx adsorption, and temperature-programmed desorption (TPD) of NH3 and NOx. The results demonstrated that the acidity of the catalyst was enhanced by the Zn introduction, as exchanged Zn(II) cations loosely bonded with Al-O-Si framework could create Brønsted acid sites by interacting with OH groups. Moreover, Zn species also provided the additional sites for NO adsorption in the form of nitrite (NO2–) and nitrate (NO3–) species, which are the key intermediates for SCR reaction. In addition, the effect of Zn on the nature of copper was studied by in situ FTIR of CO adsorption and in situ X-ray adsorption near edge structure (XANES). It was found that Zn species hindered the reduction of Cu(II) to Cu(0), resulting in higher Cu(I) species in the Zn promoted catalyst. The Cu-Zn/core-shell Al-MCM-41 exhibited higher catalytic activity compared with that of the Cu/core-shell Al-MCM-41 for the whole reaction time, as it possesses the highest amount of Cu(I) sites, which are responsible for SCR catalytic activity. The Cu-Zn/core-shell Al-MCM-41 catalyst also reached the maximum NO conversion of 100% with the average NO conversion of 76 %. The catalytic performance of the catalyst was further improved by using Zn promoter in the form of ZnO instead of reduced Zn species. 
The Cu-ZnO/core-shell Al-MCM-41 catalyst showed better catalytic performance over a longer working reaction time, achieving an average NO conversion of 81%.
Keywords: Al-MCM-41, copper, nitrogen oxide, selective catalytic reduction, zinc
Procedia PDF Downloads 302
8363 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes
Authors: Angela U. Makolo
Abstract:
Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis. Existing bioinformatics tools have been targeted towards Protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions. Alignment-free techniques can overcome this limitation and identify both regions. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was determined in terms of F1 score, accuracy, sensitivity, and specificity. The average generalization performance of PNRI was determined using a benchmark of multi-species organisms. 
The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and to 0.378, respectively, after three iterations. The cost (difference between the predicted and the actual outcome) also decreased from 1.446 to 0.842 and to 0.718, respectively, for the first, second and third iterations. The iterations terminated at the 390th epoch, having an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating an improved predictive ability. The PNRI identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy (0.969), sensitivity (0.966), and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, thereby making the developed model better in the identification of Protein-coding and Non-coding regions in transcriptomes. The developed Protein-coding and Non-coding region identifier model efficiently identified the Protein-coding and Non-coding transcriptomic regions. It could be used in genome annotation and in the analysis of transcriptomes.
Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation
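The core of the PNRI model described above, logistic regression with a sigmoid activation trained by iterative gradient updates, can be sketched in a few lines. The toy feature values below are synthetic stand-ins for the 37,503-point transcriptome dataset, and a fixed 0.5 cut-off replaces the paper's dynamic thresholding:

```python
import math

# Minimal sketch of a logistic-regression classifier of the kind described
# above: sigmoid activation, log-likelihood gradient, iterative updates.
# The data are synthetic, and the decision threshold is fixed at 0.5 rather
# than the dynamic thresholding used in the paper.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(features, labels, lr=0.5, epochs=500):
    w = [0.0] * len(features[0])  # parameter (coefficient) vector
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = y - p  # gradient of the per-sample log-likelihood
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy data: one feature separating "coding" (1) from "non-coding" (0).
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]

w, b = train(X, y)
predict = lambda x: 1 if sigmoid(w[0] * x[0] + b) >= 0.5 else 0
print([predict(x) for x in X])
```

On this separable toy set the trained model recovers all six labels; the real model additionally tunes the classification threshold from the ROC curve.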
Procedia PDF Downloads 68
8362 Accurate Energy Assessment Technique for Mine-Water District Heat Network
Authors: B. Philip, J. Littlewood, R. Radford, N. Evans, T. Whyman, D. P. Jones
Abstract:
UK buildings and energy infrastructures are heavily dependent on natural gas, a large proportion of which is used for domestic space heating. However, approximately half of the gas consumed in the UK is imported. Improving energy security and reducing carbon emissions are major government drivers for reducing gas dependency. In order to do so, there needs to be a wholesale shift in the energy provision to householders without impacting on thermal comfort levels, convenience or cost of supply to the end user. Heat pumps are seen as a potential alternative in modern, well-insulated homes; however, can the same be said of older homes? A large proportion of the housing stock in Britain was built prior to 1919. The age of the buildings bears testimony to the quality of construction; however, their thermal performance falls far below the minimum currently set by UK building standards. In recent years significant sums of money have been invested to improve energy efficiency and combat fuel poverty in some of the most deprived areas of Wales. Increasing the energy efficiency of older properties remains a significant challenge, which cannot be achieved through insulation and air-tightness interventions alone, particularly when alterations to historically important architectural features of the building are not permitted. This paper investigates the energy demand of pre-1919 dwellings in a former Welsh mining village, the feasibility of meeting that demand using water from the disused mine workings to supply a district heat network, and potential barriers to the success of the scheme. The use of renewable solar energy generation and storage technologies, both thermal and electrical, to reduce the load and offset increased electricity demand is considered. A holistic surveying approach to provide a more accurate assessment of total household heat demand is proposed. 
Several surveying techniques, including condition surveys, air permeability tests, heat loss calculations, and thermography, were employed to provide a clear picture of energy demand. Additional insulation can bring unforeseen consequences which are detrimental to the fabric of the building, potentially leading to accelerated dilapidation of the asset being ‘protected’. Increasing ventilation should be considered in parallel, to compensate for the associated reduction in uncontrolled infiltration. The effectiveness of thermal performance improvements is demonstrated, and the detrimental effects of incorrect material choice and poor installation are highlighted. The findings show the estimated heat demand to be closely correlated with household energy bills. Major areas of heat loss were identified, such that improvements to building thermal performance could be targeted. The findings demonstrate that the use of heat pumps in older buildings is viable, provided sufficient improvement to thermal performance is possible. The addition of passive solar thermal and photovoltaic generation can help reduce the load and running cost for the householder. The results were used to predict future heat demand following energy efficiency improvements, thereby informing the size of heat pumps required.
Keywords: heat demand, heat pump, renewable energy, retrofit
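The heat loss calculations mentioned above combine fabric losses (U-value times area times temperature difference for each building element) with a ventilation term; the result is the whole-house heat demand figure used to size a heat pump. A back-of-envelope sketch, with U-values, areas, and the air-change rate chosen as plausible illustrations for a pre-1919 solid-wall dwelling, not the survey's data:

```python
# Illustrative fabric + ventilation heat-loss estimate of the kind that feeds
# a whole-house heat demand figure. All values are assumed, not survey data.

elements = [
    # (name, U-value in W/(m^2 K), area in m^2)
    ("solid brick walls", 2.1, 85.0),
    ("single-glazed windows", 4.8, 12.0),
    ("uninsulated roof", 2.3, 45.0),
    ("suspended floor", 1.2, 40.0),
]

def fabric_loss_w(delta_t):
    """Fabric loss: sum of U * A * deltaT over all elements, in watts."""
    return sum(u * a * delta_t for _, u, a in elements)

def ventilation_loss_w(delta_t, air_changes_per_hour=1.5, volume_m3=220.0):
    # 0.33 Wh/(m^3 K) approximates the volumetric heat capacity of air.
    return 0.33 * air_changes_per_hour * volume_m3 * delta_t

delta_t = 21.0 - (-1.0)  # design indoor/outdoor temperature difference, K
total_kw = (fabric_loss_w(delta_t) + ventilation_loss_w(delta_t)) / 1000.0
print(round(total_kw, 1))  # → 10.9
```

Note how reducing the air-change rate (air-tightness work) shrinks the ventilation term, which is why the abstract pairs insulation with deliberate ventilation planning.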
Procedia PDF Downloads 92
8361 Overweight and Neurocognitive Functioning: Unraveling the Antagonistic Relationship in Adolescents
Authors: Swati Bajpai, S. P. K Jena
Abstract:
Background: There is a dramatic increase in the prevalence and severity of overweight in adolescents, raising concerns about its psychosocial and cognitive consequences and thereby indicating the immediate need to understand the effects of increased weight on scholastic performance. Although the body of research is currently limited, available results have identified an inverse relationship between obesity and cognition in adolescents. Aim: To examine the association between increased Body Mass Index in adolescents and their neurocognitive functioning. Methods: A case-control study of 28 subjects in the age group of 11-17 years (14 males and 14 females) was conducted on the basis of the main inclusion criterion (Body Mass Index). Subjects were assigned to an experimental group (overweight) and a control group (normal weight). A complete neurocognitive assessment was carried out using validated psychological scales, namely the Colored Progressive Matrices (intelligence); the Bender Visual Motor Gestalt Test (perceptual motor functioning); the PGI Memory Scale for Children (memory functioning) and Malin's Intelligence Scale for Indian Children (verbal and performance ability). Results: Statistical analysis of the results showed that 57% of the experimental group lagged in cognitive abilities, especially in general knowledge (99.1±12.0 vs. 102.8±6.7), working memory (91.5±8.4 vs. 93.1±8.7), concrete ability (82.3±11.5 vs. 92.6±1.7) and perceptual motor functioning (1.5±1.0 vs. 0.3±0.9), as compared to the control group. Conclusion: Our investigations suggest that weight gain results, at least in part, from a neurological predisposition characterized by reduced executive function, and in turn obesity itself has a compounding negative impact on the brain, though a larger sample is needed to make more affirmative claims.
Keywords: adolescents, body mass index, neurocognition, obesity
Procedia PDF Downloads 487
8360 Improving Patient-Care Services at an Oncology Center with a Flexible Adaptive Scheduling Procedure
Authors: P. Hooshangitabrizi, I. Contreras, N. Bhuiyan
Abstract:
This work presents an online scheduling problem which accommodates multiple requests of patients for chemotherapy treatments at a cancer center of a major metropolitan hospital in Canada. To solve the problem, an adaptive flexible approach is proposed which systematically combines two optimization models. The first model is intended to dynamically schedule arriving requests in the form of waiting lists, whereas the second model is used to reschedule the already booked patients with the goal of finding better resource allocations when new information becomes available. Both models are created as mixed integer programming formulations. Various controllable and flexible parameters, such as deviation from the prescribed target dates by a pre-determined threshold, changes to the start times of already booked appointments, and the maximum number of appointments to move in the schedule, are included in the proposed approach to provide sufficient degrees of flexibility in handling arriving requests and unexpected changes. Several computational experiments are conducted to evaluate the performance of the proposed approach using historical data provided by the oncology clinic. Our approach achieves markedly better results than the scheduling system being used in practice. Moreover, several analyses are conducted to evaluate the effect of considering different levels of flexibility on the obtained results and to assess the performance of the proposed approach in dealing with last-minute changes. We strongly believe that the proposed flexible adaptive approach is well suited for implementation at the clinic to provide better patient-care services and to utilize available resources more efficiently.
Keywords: chemotherapy scheduling, multi-appointment modeling, optimization of resources, satisfaction of patients, mixed integer programming
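The role of the first model, placing waiting-list requests near their prescribed target dates within a deviation threshold and subject to daily capacity, can be illustrated with a greedy stand-in; the actual system solves this as a mixed integer program, and the capacities, horizon, and requests below are invented:

```python
# Simplified greedy stand-in for the first optimization model described above:
# each arriving request is booked on the feasible day closest to its target
# date, within a fixed deviation threshold and daily chair capacity. The real
# approach solves a mixed integer program; all values here are illustrative.

CAPACITY_PER_DAY = 2   # chemotherapy chairs available each day (assumed)
THRESHOLD = 2          # allowed deviation (days) from the target date

def schedule(requests, horizon=10):
    load = {day: 0 for day in range(horizon)}
    booking = {}
    for patient, target in requests:
        # Feasible days, closest to the target date first.
        candidates = sorted(
            (d for d in load if abs(d - target) <= THRESHOLD),
            key=lambda d: (abs(d - target), d),
        )
        for day in candidates:
            if load[day] < CAPACITY_PER_DAY:
                load[day] += 1
                booking[patient] = day
                break
    return booking

requests = [("p1", 3), ("p2", 3), ("p3", 3), ("p4", 3)]
print(schedule(requests))  # → {'p1': 3, 'p2': 3, 'p3': 2, 'p4': 2}
```

Once day 3 is full, later requests spill to the nearest day within the threshold; the MIP formulation additionally weighs rescheduling already booked patients, which a greedy pass cannot do.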
Procedia PDF Downloads 168
8359 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface
Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves
Abstract:
This work analyzes how the wideband performance of the Conformal Finite Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey, and Wenhua Yu for the Finite Difference Time-Domain (FDTD) method degrades as the mesh discretization is impoverished. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces otherwise require dense meshes to reduce the error introduced by surface staircasing. Referred to in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement they deliver has not been broadly quantified over wide frequency bands and poorly discretized meshes. Both approaches improve the accuracy of the simulation without requiring dense meshes, making it possible to exploit poorly discretized meshes that reduce simulation time and computational expense while retaining the desired accuracy. However, their application is limited with respect to the degree of mesh impoverishment and the frequency range of interest. The goal of this work is therefore to explore both the wideband and the mesh-impoverishment performance of these approaches, to give wider insight into these aspects of FDTD applications. The D-FDTD-Diel approach modifies the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material along the mesh cell edges. By accounting for these intersections, D-FDTD-Diel improves accuracy at the cost of computational preprocessing, which is a fair trade-off, since the update modification is quite simple.
Likewise, the D-FDTD-PEC approach modifies the magnetic field update, taking into account the PEC curved-surface intersections within the mesh cells and, for a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. Like D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, although with the drawback of having to satisfy a stability criterion. The algorithms are formulated and applied to PEC and dielectric spherical scattering surfaces with meshes at different levels of discretization, using Polytetrafluoroethylene (PTFE) as the dielectric, a material common in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing how the approaches' wideband performance drops along with the mesh impoverishment. The benefits in computational efficiency, simulation time, and accuracy are also shown and discussed according to the desired frequency range, showing that poorly discretized FDTD meshes can be exploited more efficiently while retaining the desired accuracy. The results provide broader insight into the limitations of the C-FDTD approaches in poorly discretized and wide-frequency-band simulations of dielectric and PEC curved surfaces, limitations that are not clearly defined or detailed in the literature and are therefore a novelty. These approaches are also expected to be applied to the modeling of curved RF components for wideband and high-speed communication devices in future work.
Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment
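The idea behind the D-FDTD-Diel update, weighting the material seen by a field component by how much of the cell edge lies inside the dielectric, can be illustrated with a minimal Python sketch. The exact weighting and field layout of the published scheme may differ; the function names and the linear averaging shown here are illustrative assumptions.

```python
def effective_permittivity(fill_fraction, eps_material, eps_background=1.0):
    """Relative permittivity assigned to a cell edge partially inside the
    dielectric: a linear average weighted by the edge's fill fraction
    (illustrative; the published D-FDTD-Diel weighting may differ)."""
    return fill_fraction * eps_material + (1.0 - fill_fraction) * eps_background

def update_e(e, h_curl, dt, fill_fractions, eps_material,
             eps0=8.8541878128e-12):
    """One explicit E-field update, E += dt / (eps0 * eps_eff) * curl(H),
    using a per-edge effective permittivity instead of a staircased one."""
    return [
        ei + dt / (eps0 * effective_permittivity(f, eps_material)) * c
        for ei, c, f in zip(e, h_curl, fill_fractions)
    ]
```

For PTFE (relative permittivity about 2.1), an edge half filled by the dielectric would be updated with an effective relative permittivity of about 1.55 rather than jumping between 1.0 and 2.1, which is what reduces the staircasing error on coarse meshes.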
Procedia PDF Downloads 134
8358 Variable Refrigerant Flow (VRF) Zonal Load Prediction Using a Transfer Learning-Based Framework
Authors: Junyu Chen, Peng Xu
Abstract:
In the context of global efforts to enhance building energy efficiency, accurate thermal load forecasting is crucial for both device sizing and predictive control. Variable Refrigerant Flow (VRF) systems are widely used in buildings around the world, yet VRF zonal load prediction has received limited attention. Because building-level prediction methods do not capture the differences between VRF zones, zone-level load forecasting can significantly enhance accuracy. Given that modern VRF systems generate high-quality data, this paper introduces transfer learning to leverage these data and further improve prediction performance. The framework also addresses the challenge of predicting loads for building zones with no historical data, offering greater accuracy and usability than pure white-box models. The study first establishes an initial variable set for VRF zonal building loads and generates a foundational white-box database using EnergyPlus. Key variables for VRF zonal loads are identified using methods including SRRC, PRCC, and Random Forest. XGBoost and LSTM models are employed to generate pre-trained black-box models based on the white-box database. Finally, real-world data are incorporated into the pre-trained models using transfer learning to enhance their performance in operational buildings. In this paper, zone-level load prediction was integrated with transfer learning, and a framework was proposed to improve the accuracy and applicability of VRF zonal load prediction.
Keywords: zonal load prediction, variable refrigerant flow (VRF) system, transfer learning, EnergyPlus
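The pretrain-then-fine-tune pattern described above can be sketched with a deliberately tiny stand-in model: fit on abundant simulated ("white-box") data first, then continue training from those weights on a few measured points. This is a minimal illustration of the transfer step only, not the paper's XGBoost/LSTM pipeline; the 1-D linear model and the synthetic load curves are assumptions.

```python
def fit(xs, ys, w=0.0, b=0.0, lr=0.01, epochs=500):
    """Plain gradient descent on a 1-D linear model y = w*x + b,
    starting from the given initial weights (the transfer mechanism)."""
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# 1) Pre-train on abundant simulated load data (y = 2x + 1 stands in
#    for the EnergyPlus-generated white-box database)
sim_x = [i / 10 for i in range(50)]
sim_y = [2 * x + 1 for x in sim_x]
w0, b0 = fit(sim_x, sim_y)

# 2) Fine-tune on a handful of measured points from the target zone,
#    starting from the pre-trained weights (true relation y = 2x + 1.5)
real_x = [0.5, 1.5, 2.5]
real_y = [2 * x + 1.5 for x in real_x]
w1, b1 = fit(real_x, real_y, w=w0, b=b0, epochs=200)
```

The point of the pattern is that the fine-tuned model fits the operational zone better than the purely simulation-trained one, even though only three "real" observations were available.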
Procedia PDF Downloads 28
8357 Flood Early Warning and Management System
Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare
Abstract:
The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With increased incidences of floods and their related catastrophes, an Early Warning System for Flood Prediction (EWS-FP) and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an EWS-FP using advanced computational tools and methods, viz. High-Performance Computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model that solves the shallow water equations using the finite volume method. Considering the complexity of hydrological modeling and the size of the basins in India, there is always a trade-off between better forecast lead time and the optimal resolution at which the simulations are run. High-performance computing provides a good computational means to overcome this issue when constructing national-level or basin-level flash flood warning systems that offer high-resolution, local-level warning analysis with better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimal resolutions. In this study, a free and open-source, HPC-based 2D hydrodynamic model, capable of simulating rainfall runoff, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing the number of CPU nodes from 45 to 135, which shows good scalability and performance enhancement.
The simulated flood inundation spread and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and a lead time suitable for near-real-time flood forecasting. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, such as displaying spatial maps of the affected area and of inundated roads, and maintains a steady flow of information at all levels, with different access rights depending upon the criticality of the information. It is designed to help users manage information related to flooding during critical flood seasons and analyze the extent of the damage.
Keywords: flood, modeling, HPC, FOSS
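The reported scalability figures can be checked with simple strong-scaling arithmetic: going from 45 to 135 nodes (3x the resources) cut the run time from 8 h to 3 h, a speedup of about 2.67x, i.e. a parallel efficiency of roughly 89%. A small sketch of that calculation (the function name is ours, not from the study):

```python
def scaling_efficiency(t_base, t_scaled, nodes_base, nodes_scaled):
    """Strong-scaling speedup and parallel efficiency for a node increase:
    speedup = t_base / t_scaled, efficiency = speedup / node ratio."""
    speedup = t_base / t_scaled
    efficiency = speedup / (nodes_scaled / nodes_base)
    return speedup, efficiency

# Figures reported for the Mahanadi Delta run: 8 h -> 3 h on 45 -> 135 nodes
speedup, eff = scaling_efficiency(8.0, 3.0, 45, 135)  # ~2.67x, ~0.89
```

An efficiency near 0.9 at a 3x node increase is consistent with the authors' claim of good scalability for the 2D hydrodynamic model.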
Procedia PDF Downloads 89