Search results for: COMSOL optimization simulation
89 Design of Ultra-Light and Ultra-Stiff Lattice Structure for Performance Improvement of Robotic Knee Exoskeleton
Authors: Bing Chen, Xiang Ni, Eric Li
Abstract:
With population ageing, the number of patients suffering from chronic diseases is increasing, and stroke in particular has a high incidence among the elderly. In addition, there is a gradual increase in the number of patients with orthopedic or neurological conditions such as spinal cord injuries, nerve injuries, and other knee injuries. These diseases are chronic, with high recurrence and complication rates, and normal walking is difficult for such patients. Robotic knee exoskeletons have therefore been developed for individuals with knee impairments. However, the currently available robotic knee exoskeletons are generally heavy, which makes them uncomfortable to wear, causes wearing fatigue, shortens the wearing time, and reduces the efficiency of the exoskeleton. Some lightweight materials, such as carbon fiber and titanium alloy, have been used in the development of robotic knee exoskeletons; however, this increases their cost. This paper presents the design of a new ultra-light and ultra-stiff truss-type lattice structure. The lattice structures are arranged in a fan shape, which fits well with circular arc surfaces such as circular holes, and they can be used in the design of rods, brackets, and other parts of a robotic knee exoskeleton to reduce its weight. The metamaterial is formed by the continuous arrangement and combination of small truss unit cells, varying the strut cross-section diameter, geometrical size, and relative density of each unit cell. It can be fabricated quickly through additive manufacturing techniques such as metal 3D printing. Because the truss unit cell is small, machined parts of the robotic knee exoskeleton, such as connectors, rods, and bearing brackets, can be filled and replaced using graded and non-uniform cell distributions. 
Provided the mechanical requirements of the robotic knee exoskeleton are satisfied, the weight of the exoskeleton is reduced; hence, the patient's wearing fatigue is reduced and the wearing time is increased. Thus, the efficiency, wearing comfort, and safety of the exoskeleton can be improved. In this paper, a brief description of the hardware design of the robotic knee exoskeleton prototype is first presented. Next, the design of the ultra-light and ultra-stiff truss-type lattice structures is proposed, and the mechanical analysis of the single unit cell is performed by establishing a theoretical model. Additionally, simulations are performed to evaluate the maximum stress-bearing capacity and compressive performance of uniform and gradient arrangements of the cells. Finally, static analyses are performed for the cell-filled rod and the unmodified rod, respectively, and the simulation results demonstrate the effectiveness and feasibility of the designed ultra-light and ultra-stiff truss-type lattice structures. In future studies, experiments will be conducted to further evaluate the performance of the designed lattice structures.
Keywords: additive manufacturing, lattice structures, metamaterial, robotic knee exoskeleton
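The stiffness-to-weight argument above hinges on the unit cell's relative density. As a rough sketch (assuming a simple cubic strut cell and a Gibson-Ashby-type scaling law, neither of which is specified in the abstract, with generic placeholder prefactors), the relative density and a stiffness estimate can be computed as:

```python
import math

def relative_density(strut_diameter, cell_size, n_struts=12):
    """Approximate relative density of a cubic truss unit cell:
    total strut volume over cell volume (node overlap ignored)."""
    strut_volume = n_struts * math.pi * (strut_diameter / 2.0) ** 2 * cell_size
    return strut_volume / cell_size ** 3

def stiffness_ratio(rho_bar, mode="stretch"):
    """Gibson-Ashby-type scaling of lattice modulus E over solid modulus Es.
    Prefactors are generic placeholders, not values from the paper."""
    if mode == "stretch":      # stretch-dominated ("ultra-stiff") lattices
        return rho_bar / 3.0
    return rho_bar ** 2        # bending-dominated lattices

rho = relative_density(strut_diameter=0.1, cell_size=1.0)
```

At the same relative density, a stretch-dominated cell retains far more stiffness per unit mass than a bending-dominated one, which is the usual rationale for calling such a lattice "ultra-stiff".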
Procedia PDF Downloads 107
88 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems
Authors: Ibram Khalafalla Roshdy Shokry
Abstract:
This paper presents a carrier sense multiple access (CSMA) communication model based on an SoC design methodology. Such a model can be used to guide the modelling of complex wireless communication systems, and its use is therefore an essential step in the construction of high-performance communication systems. SystemC has been selected because it offers a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication created via the modelling of the CSMA protocol can be used to achieve communication among all the agents and to coordinate access to the shared medium (channel). Equipping vehicles with wireless communication capabilities is expected to be the key to the evolution toward next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for the enhancement of Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. This study therefore focuses on evaluating the actual performance of vehicular communication, with particular attention to the effects of the real environment and mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission device was used to test and evaluate the effect of the transmission range in V2X communication. 
The evaluation of V2I and V2V communication takes into account the real effects of low and high mobility on transmission. Multi-agent systems have received significant attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, because it directly influences their overall performance and scalability. This work explores essential communication factors and conducts a comparative assessment of the various protocols used in multi-agent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of these protocols across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multi-agent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multi-agent systems and their communication protocols.
Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA
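The medium-access behaviour being modelled can be illustrated with a minimal slotted p-persistent CSMA simulation (a generic Python sketch rather than the SystemC model; agent count, persistence probability, and slot count are all illustrative):

```python
import random

def simulate_csma(n_agents=5, slots=1000, p=0.2, seed=42):
    """Slotted p-persistent CSMA: in each slot every agent transmits with
    probability p. Exactly one transmitter -> success; more -> collision.
    Returns (successes, collisions, idle_slots)."""
    rng = random.Random(seed)  # seeded for reproducibility
    successes = collisions = idle = 0
    for _ in range(slots):
        transmitters = sum(1 for _ in range(n_agents) if rng.random() < p)
        if transmitters == 0:
            idle += 1
        elif transmitters == 1:
            successes += 1
        else:
            collisions += 1
    return successes, collisions, idle

successes, collisions, idle = simulate_csma()
```

The per-slot success probability is n·p·(1-p)^(n-1) (about 0.41 for these values), which shows why coordinating access to the shared channel matters as the number of agents grows.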
Procedia PDF Downloads 28
87 Knowledge Based Software Model for the Management and Treatment of Malaria Patients: A Case of Kalisizo General Hospital
Authors: Mbonigaba Swale
Abstract:
Malaria is an infection or disease caused by parasites (Plasmodium falciparum, which causes severe malaria, Plasmodium vivax, Plasmodium ovale, and Plasmodium malariae), transmitted to humans by the bites of infected female Anopheles mosquitoes. In Africa, and particularly in Uganda, these vectors comprise two main types, Anopheles funestus and Anopheles gambiae (e.g., Anopheles arabiensis); they feed on humans inside the house mainly at dusk, midnight, and dawn, and rest indoors, which makes them effective transmitters (vectors) of the disease. People in both urban and rural areas have consistently become prone to repeated attacks of malaria, causing many deaths and significantly increasing the poverty levels of the rural poor. Malaria is a national problem; it causes many maternal prenatal and antenatal disorders, anemia in pregnant mothers, low birth weight in newborns, and convulsions and epilepsy among infants. Cumulatively, it kills about one million children every year in sub-Saharan Africa. It has been estimated to account for 25-35% of all outpatient visits, 20-45% of acute hospital admissions, and 15-35% of hospital deaths. Uganda is the leading victim country, and Rakai and Masaka districts are the most affected. It is therefore unclear whether these abhorrent situations, episodes of recurrence, and failures to cure the disease result from poor diagnosis, prescription and dosing, treatment habits and patient compliance with the drugs, or the ethical domain of the stakeholders in relation to the mainstream methodology of malaria management. 
The research is aimed at offering an alternative approach that addresses the problem by using a knowledge-based software model of Artificial Intelligence (AI) capable of common-sense and cognitive reasoning, so as to take decisions as the human brain would and provide instantaneous expert solutions, avoiding speculative treatment of the problem during differential diagnosis. This system will assist physicians in many kinds of medical diagnosis, in prescribing treatments and doses, and in monitoring patient responses. Based on the body weight and age group of the patient, it will be able to provide instantaneous and timely information, alternative options, and approaches to influence decision-making during case analysis. The computerized approach, a new model in Uganda termed "Software Aided Treatment" (SAT), will attempt to change the moral and ethical approach and influence conduct so as to improve the skills, experience, and values (social and ethical) in the administration and management of the disease and drugs (combination therapy and generics) by both the patient and the health worker.
Keywords: knowledge based software, management, treatment, diagnosis
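The weight- and age-based decision support described above amounts, at its core, to a rule base. A toy illustration of such a rule base follows; the thresholds and outputs are entirely made up for demonstration, are not the SAT system's actual rules, and are not clinical guidance:

```python
def recommend_dose(weight_kg, age_years):
    """Illustrative weight-band rule base for a combination-therapy dose.
    All thresholds and doses are hypothetical placeholders."""
    if weight_kg < 5 or age_years < 0.5:
        return "refer to clinician"   # case falls outside the rule base
    bands = [
        (15.0, "1 tablet twice daily"),
        (25.0, "2 tablets twice daily"),
        (35.0, "3 tablets twice daily"),
        (float("inf"), "4 tablets twice daily"),
    ]
    for upper_limit, dose in bands:
        if weight_kg < upper_limit:
            return dose
```

A real knowledge-based system would chain many such rules (diagnosis, contraindications, response monitoring) and justify each conclusion, but the lookup pattern is the same.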
Procedia PDF Downloads 57
86 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that uses behaviour simulation of the people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. 
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low sampling data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector for the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both simulated data from LPG and the real Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
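The DTW step at the core of the identification stage can be sketched with the standard dynamic-programming recurrence (a from-scratch illustration; the appliance templates below are invented, not the paper's feature vectors):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def classify(signature, templates):
    """Assign a power signature to the nearest appliance template."""
    return min(templates, key=lambda name: dtw_distance(signature, templates[name]))

# hypothetical per-minute power templates (illustrative units)
templates = {"kettle": [0, 5, 5, 0], "fridge": [0, 1, 1, 1, 0]}
matched = classify([0, 5, 4, 0], templates)
```

Because DTW aligns sequences non-linearly in time, a time-shifted copy of a load profile still matches its template exactly, which is what makes it attractive for event-based signatures at low sampling rates.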
Procedia PDF Downloads 78
85 Barbie in India: A Study of Effects of Barbie in Psychological and Social Health
Authors: Suhrita Saha
Abstract:
Barbie is a fashion doll manufactured by the American toy company Mattel Inc; it made its debut at the American International Toy Fair in New York on 9 March 1959. From a fashion doll to a symbol of fetishistic commodification, Barbie has come a long way. A Barbie doll is sold every three seconds across the world, which makes the billion-dollar brand the world's most popular doll for girls. The 11.5-inch moulded plastic doll corresponds to a height of 5 feet 9 inches at 1/6 scale. Her vital statistics have been estimated at 36 inches (chest), 18 inches (waist), and 33 inches (hips). Her weight is permanently set at 110 pounds, which would be 35 pounds underweight. Ruth Handler, the creator of Barbie, wanted a doll that represented adulthood and allowed children to imagine themselves as teenagers or adults. While Barbie might have been intended to be independent, imaginative, and innovative, her physical uniqueness does not confine the doll to the status of a plaything. She is a cultural icon, but one with far-reaching critical implications. The doll is a commodity bearing more social value than practical use value. The way Barbie is produced represents the industrialization and commodification of the process of symbolic production. And this symbolic production and consumption is a standardized, planned one that produces a stereotypical 'pseudo-individuality' and suppresses cultural alternatives. Children are subjected to, and also arise as subjects in, this consumer context. A very gendered, physiologically dissected, sexually charged symbolism is imposed upon children (both male and female), childhood, their social worlds, identity, and relationship formation. Barbie is also very popular among Indian children. While the doll is essentially an imaginative representation of the West, it is internalized by Indian sensibilities. 
Through observation and questionnaire-based interviews within a sample population of adolescent children (primarily female, a few male) and parents (primarily mothers) in Kolkata, an Indian metropolis, the paper puts forth findings of sociological relevance. 1. Barbie creates, recreates, and accentuates already existing divides between binaries such as male-female, fat-thin, sexy-nonsexy, and beauty-brain. 2. The Indian girl child, in her associative process with Barbie, wants to be like her and commodifies her own self. The male child also readily accepts this standardized commodification. The definition of beauty is thus based on prejudice and stereotype. 3. Not being able to become Barbie creates both psychological and physiological health issues, varying from anorexia to obesity as well as personality disorders. 4. From being a plaything, Barbie becomes the game maker. Barbie, along with many other forms of simulation, further creates a consumer culture and a market for all kinds of fitness-related hyper-enchantment and subsequent disillusionment. The construct becomes the reality, and the real gets lost in the play world. The paper thus argues that Barbie transforms from an innocuous doll into a social construct with a long-term and irreversible adverse impact.
Keywords: Barbie, commodification, personality disorder, stereotype
Procedia PDF Downloads 364
84 Analytical and Numerical Modeling of Strongly Rotating Rarefied Gas Flows
Authors: S. Pradhan, V. Kumaran
Abstract:
Centrifugal gas separation processes effect separation by utilizing the difference in mole fraction in a high-speed rotating cylinder caused by the difference in molecular mass and, consequently, in centrifugal force density. These have been widely used in isotope separation because chemical separation methods cannot be used to separate isotopes of the same chemical species. More recently, centrifugal separation has also been explored for the separation of gases such as carbon dioxide and methane. The efficiency of separation is critically dependent on the secondary flow generated by temperature gradients at the cylinder wall or by inserts, and it is important to formulate accurate models for this secondary flow. The widely used Onsager model for secondary flow is restricted to very long cylinders, where the length is large compared to the diameter, and to the limit of high stratification parameter, where the gas is restricted to a thin layer near the wall of the cylinder; it also assumes no mass difference between the two species when calculating the secondary flow. The present analysis of rarefied gas flow in a rotating cylinder has two objectives. The first is to remove the restriction of high stratification parameter, to generalize the solutions to low rotation speeds where the stratification parameter may be O(1), and to apply them to dissimilar gases, accounting for the difference in molecular mass of the two species. Secondly, we compare the predictions with molecular simulations based on the direct simulation Monte Carlo (DSMC) method for rarefied gas flows, in order to quantify the errors resulting from the approximations at different aspect ratios, Reynolds numbers, and stratification parameters. 
In this study, we have obtained analytical and numerical solutions for the secondary flows generated at the curved cylinder surface and at the end-caps due to a linear wall temperature gradient and external gas inflow/outflow at the axis of the cylinder. The effects of sources of mass, momentum, and energy within the flow domain are also analyzed. The results of the analytical solutions are compared with the results of DSMC simulations for three types of forcing: a wall temperature gradient, inflow/outflow of gas along the axis, and mass/momentum input due to inserts within the flow. The comparison reveals that the boundary conditions in the simulations and the analysis have to be matched with care. The commonly used diffuse reflection boundary conditions at solid walls in DSMC simulations result in a non-zero slip velocity as well as a temperature slip (the gas temperature at the wall differs from the wall temperature). These have to be incorporated in the analysis in order to make quantitative predictions. In the case of mass/momentum/energy sources within the flow, it is necessary to ensure that the homogeneous boundary conditions are accurately satisfied in the simulations. When these precautions are taken, there is excellent agreement between analysis and simulations, to within 10%, even when the stratification parameter is as low as 0.707, the Reynolds number is as low as 100, the aspect ratio (length/diameter) of the cylinder is as low as 2, and the secondary flow velocity is as high as 0.2 times the maximum base flow velocity.
Keywords: rotating flows, generalized Onsager and Carrier-Maslen models, DSMC simulations, rarefied gas flow
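The stratification parameter quoted in the comparison is commonly defined as the ratio of the wall (peripheral) speed to the most probable thermal speed of the molecules; this specific definition is an assumption here, since the paper may normalize differently. A quick sketch of that definition:

```python
import math

KB = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg

def stratification_parameter(molar_mass_amu, wall_speed, temperature):
    """A = v_wall / v_thermal = v_wall * sqrt(m / (2 k_B T)),
    with v_thermal the most probable speed (assumed definition)."""
    m = molar_mass_amu * AMU
    return wall_speed * math.sqrt(m / (2.0 * KB * temperature))

# heavier gases stratify far more strongly at the same wall speed
A_heavy = stratification_parameter(352.0, 600.0, 300.0)  # UF6-like molecule (illustrative)
A_he = stratification_parameter(4.0, 600.0, 300.0)       # helium
```

At A near 0.707, the lowest value tested above, the gas is only weakly confined to the wall, which is exactly the O(1) regime the generalized model targets.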
Procedia PDF Downloads 399
83 The Effect of Online Analyzer Malfunction on the Performance of Sulfur Recovery Unit and Providing a Temporary Solution to Reduce the Emission Rate
Authors: Hamid Reza Mahdipoor, Mehdi Bahrami, Mohammad Bodaghi, Seyed Ali Akbar Mansoori
Abstract:
Nowadays, with stricter limitations to reduce emissions, considerable penalties are imposed if pollution limits are exceeded. Therefore, refineries, along with focusing on improving the quality of their products, are also focused on producing products with the least environmental impact. The duty of the sulfur recovery unit (SRU) is to convert H₂S gas coming from the upstream units to elemental sulfur and minimize the burning of sulfur compounds to SO₂. The Claus process is a common process for converting H₂S to sulfur, including a reaction furnace followed by catalytic reactors and sulfur condensers. In addition to a Claus section, SRUs usually consist of a tail gas treatment (TGT) section to decrease the concentration of SO₂ in the flue gas below the emission limits. To operate an SRU properly, the flow rate of combustion air to the reaction furnace must be adjusted so that the Claus reaction is performed according to stoichiometry. Accurate control of the air demand leads to an optimum recovery of sulfur during the flow and composition fluctuations in the acid gas feed. Therefore, the major control system in the SRU is the air demand control loop, which includes a feed-forward control system based on predetermined feed flow rates and a feed-back control system based on the signal from the tail gas online analyzer. The use of online analyzers requires compliance with the installation and operation instructions. Unfortunately, most of these analyzers in Iran are out of service for different reasons, like the low importance of environmental issues and a lack of access to after-sales services due to sanctions. In this paper, an SRU in Iran was simulated and calibrated using industrial experimental data. Afterward, the effect of the malfunction of the online analyzer on the performance of SRU was investigated using the calibrated simulation. 
The results showed that an increase in the SO₂ concentration in the tail gas led to an increase in the temperature of the reduction reactor in the TGT section. This temperature increase caused the failure of the TGT section and increased the SO₂ concentration from 750 ppm to 35,000 ppm. In addition, the lack of a control system for adjusting the combustion air caused further increases in SO₂ emissions. In some processes, the main variable cannot be controlled directly because of measurement difficulties or a long delay in the sampling system. In these cases, a secondary variable, which can be measured more easily, is controlled instead. With the correct selection of this variable, the main variable is controlled along with the secondary variable. This strategy for controlling a process system is referred to as "inferential control" and is considered in this paper. Therefore, a sensitivity analysis was performed to investigate the sensitivity of other measurable parameters to input disturbances. The results revealed that the outlet temperature of the first Claus reactor could be used for inferential control of the combustion air. Applying this method to the operation led to maximizing the sulfur recovery in the Claus section.
Keywords: sulfur recovery, online analyzer, inferential control, SO₂ emission
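The inferential-control idea, using an easily measured reactor outlet temperature as a proxy for air demand, can be sketched with a toy PI loop. The plant gains, time constant, and controller tuning below are invented for illustration and are not from the paper's simulation:

```python
def simulate_inferential_control(setpoint=325.0, steps=200, kp=0.05, ki=0.01):
    """PI control of a hypothetical first-order reactor-temperature response
    to the normalized combustion-air ratio (all numbers illustrative)."""
    air = 1.0          # normalized air-to-acid-gas ratio (manipulated variable)
    temp = 280.0       # reactor outlet temperature, arbitrary units (measured proxy)
    integral = 0.0
    for _ in range(steps):
        # toy plant: temperature relaxes toward 250 + 60 * air each step
        temp += 0.2 * ((250.0 + 60.0 * air) - temp)
        error = setpoint - temp
        integral += error
        air = max(0.0, 1.0 + kp * error + ki * integral)
    return temp, air

final_temp, final_air = simulate_inferential_control()
```

Driving the secondary variable (temperature) to its setpoint indirectly holds the primary variable (air demand, and hence tail-gas SO₂) near target, which is the essence of the inferential scheme when the analyzer is out of service.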
Procedia PDF Downloads 76
82 Fault Diagnosis and Fault-Tolerant Control of Bilinear-Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings
Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun
Abstract:
Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation, and air conditioning) systems in buildings can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system, and its application to HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is "as close as possible," in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the work carried out so far in FDI (fault detection and isolation) or FTC considers a linearized model of the studied system. However, such a model is only valid over a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of HVAC system failures. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, the algorithm inherits the structure of the actual bilinear model of the building thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, i.e., weather dynamics, outdoor air temperature, and zone occupancy profile. 
A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system-component FDI. The proposed FTC strategy works as follows: at the first level, FDI algorithms are implemented, which also make it possible to estimate the magnitude of the fault. Once a fault is detected, the fault estimate is used to feed the second level and reconfigure the control law so that the expected performance is recovered. This paper is organized as follows. A general structure for fault-tolerant control of buildings is first presented, and the building model under consideration is introduced. Then, the observer-based design for fault diagnosis of bilinear systems is studied. The FTC approach is developed in Section IV. Finally, a simulation example is given in Section V to illustrate the proposed method.
Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zone buildings
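A minimal scalar illustration of the residual idea (with invented parameters; the paper works with multivariable unknown-input observers): the observer shares the bilinear model, so the residual stays at zero until an actuator fault makes the plant's actual input differ from the commanded one.

```python
def bilinear_fd_demo(steps=100, fault_step=50, fault=0.5,
                     a=0.9, n=0.05, b=1.0, l=0.5):
    """Scalar bilinear plant x+ = a*x + n*x*u + b*u with a Luenberger-style
    observer driven by the commanded input; returns the residual sequence."""
    x = x_hat = 0.0
    residuals = []
    for k in range(steps):
        y = x                          # measurement (C = 1)
        r = y - x_hat                  # innovation / residual
        residuals.append(r)
        u = 1.0                        # commanded input
        u_plant = u + (fault if k >= fault_step else 0.0)  # additive actuator fault
        x = a * x + n * x * u_plant + b * u_plant          # plant update
        x_hat = a * x_hat + n * x_hat * u + b * u + l * r  # observer update
    return residuals

res = bilinear_fd_demo()
```

Thresholding |r| gives detection, and the steady residual magnitude carries information about the fault size, which is what feeds the reconfiguration level of the FTC scheme.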
Procedia PDF Downloads 173
81 Temporal Variation of Surface Runoff and Interrill Erosion in Different Soil Textures of a Semi-arid Region, Iran
Authors: Ali Reza Vaezi, Naser Fakori Ivand, Fereshteh Azarifam
Abstract:
Interrill erosion is the detachment and transfer of soil particles between rills due to the impact of raindrops and the shear stress of shallow surface runoff. This erosion can be affected by soil properties such as texture, organic matter content, and the stability of soil aggregates. Information on the temporal variation of interrill erosion during a rainfall event, and on the effect soil properties have on it, can help in understanding the processes of runoff production and soil loss between rills on hillslopes. The importance of this study is especially great in semi-arid regions, where the soil is weakly aggregated and vegetation cover is mostly poor. This research was therefore conducted to investigate the temporal variation of surface flow and interrill erosion, and the effect of soil properties on them, in some semi-arid soils. A field experiment was conducted in eight different soil textures under simulated rainfall of uniform intensity. A total of twenty-four plots were installed for the eight study soils, with three replicates, in a randomized complete block design along the land. The plots were 1.2 m (length) × 1 m (width) and were laid out 3 m apart across the slope. Soil samples were then poured into the plots. The plots were surrounded by galvanized sheeting, and runoff and soil erosion equipment was placed at their outlets. Rainfall simulation experiments were carried out using a purpose-built portable simulator at an intensity of 60 mm per hour for 60 minutes. A plastic cover was used around the rainfall simulator frame to prevent wind from disturbing the free fall of the water drops. Runoff production and soil loss were measured over 1 hour at 5-min intervals. Soil properties such as particle size distribution, aggregate stability, bulk density, ESP, and Ks were determined in the laboratory. 
Correlation and regression analyses were performed to determine the effect of soil properties on runoff and interrill erosion. The results indicated that the study soils have both low organic matter content and low aggregate stability. The soils, except for the coarse-textured ones, are calcareous and have relatively high exchangeable sodium percentages (ESP). Runoff production and soil loss did not occur in sand, which was associated with its higher infiltration and drainage rates. In the other study soils, interrill erosion occurred simultaneously with the generation of runoff. A strong relationship was found between interrill erosion and surface runoff (R² = 0.75, p < 0.01). The correlation analysis showed that surface runoff was significantly affected by soil properties including sand, silt, and clay contents, bulk density, gravel, hydraulic conductivity (Ks), lime (calcium carbonate), and ESP. Soils with lower Ks, such as fine-textured soils, produced more surface runoff and more interrill erosion. Surface runoff production increased over time during rainfall and reached a peak after about 25-35 min. Time to peak was very short (about 30 min) in fine-textured soils, especially clay, which is related to their lower infiltration rate.
Keywords: erosion plot, rainfall simulator, soil properties, surface flow
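The regression step reduces to an ordinary least-squares fit. A minimal pure-Python sketch follows; the Ks and runoff values below are invented placeholders, not the paper's measurements:

```python
def linear_fit(x, y):
    """Simple linear regression y = intercept + slope * x, with R^2."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    predicted = [intercept + slope * xi for xi in x]
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, predicted))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return slope, intercept, 1.0 - ss_res / ss_tot

# hypothetical data: saturated conductivity Ks (mm/h) vs. runoff coefficient
ks = [2.0, 5.0, 8.0, 15.0, 30.0]
runoff = [0.62, 0.55, 0.48, 0.33, 0.10]
slope, intercept, r2 = linear_fit(ks, runoff)
```

A negative slope reproduces the qualitative finding above: lower-Ks (fine-textured) soils produce more runoff.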
Procedia PDF Downloads 69
80 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that uses behaviour simulation of the people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. 
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract a power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and real measurements from the Reference Energy Disaggregation Dataset (REDD). For that, we compute performance with confusion-matrix-based metrics: accuracy, precision, recall and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum). Keywords: general appliance model, non intrusive load monitoring, events detection, unsupervised techniques
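The DTW-based matching step described above can be sketched in a few lines; the appliance signatures and model shapes below are hypothetical stand-ins, not data from the study:

```python
# Illustrative sketch (not the authors' code): comparing a detected appliance
# power segment against general appliance models with Dynamic Time Warping.
# The signatures below are hypothetical 1/60 Hz power readings in watts.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Hypothetical general models: a fridge compressor cycle and a kettle burst.
models = {
    "fridge": [0, 120, 125, 120, 0, 0],
    "kettle": [0, 2000, 2100, 2050, 0, 0],
}
observed = [0, 115, 130, 118, 5, 0]  # event segment cut out by the detector

best = min(models, key=lambda k: dtw_distance(observed, models[k]))
print(best)  # "fridge" is the closer model for this segment
```

Because DTW warps the time axis, the match tolerates the slight shifts and stretches that low-sampling-rate data introduce, which is why it suits the unsupervised identification step.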
Procedia PDF Downloads 82
79 A Hybrid of BioWin and Computational Fluid Dynamics Based Modeling of Biological Wastewater Treatment Plants for Model-Based Control
Authors: Komal Rathore, Kiesha Pierre, Kyle Cogswell, Aaron Driscoll, Andres Tejada Martinez, Gita Iranipour, Luke Mulford, Aydin Sunol
Abstract:
Modeling of biological wastewater treatment plants requires several parameters for kinetic rate expressions, thermo-physical properties, and hydrodynamic behavior. The kinetics and associated mechanisms become complex because several biological processes take place in wastewater treatment plants at varying time and spatial scales. A dynamic process model incorporating a complex model of activated sludge kinetics was developed using the BioWin software platform for an Advanced Wastewater Treatment Plant in Valrico, Florida. Due to the extensive number of tunable parameters, an experimental design was employed for judicious selection of the most influential parameter sets and their bounds. The model was tuned using both influent and effluent plant data to reconcile and rectify the forecasted results from the BioWin model. The amount of mixed-liquor suspended solids in the oxidation ditch, the aeration rates, and the recycle rates were adjusted accordingly. The experimental analysis and plant SCADA data were used to predict influent wastewater rates and composition profiles as a function of time over extended periods. The lumped dynamic model development was coupled with Computational Fluid Dynamics (CFD) modeling of key units such as the oxidation ditches in the plant. Several CFD models that incorporate the nitrification-denitrification kinetics as well as the hydrodynamics were developed and are being tested on the ANSYS Fluent software platform. These realistic, verified models developed with BioWin and ANSYS were used to plan the plant's operating policies and control strategies in advance, which in turn allows regulatory compliance at minimum operational cost. With modest re-tuning, these models can be applied to other biological wastewater treatment plants as well. 
The BioWin model mimics the existing performance of the Valrico plant, which allowed the operators and engineers to predict effluent behavior and take control actions to meet the plant's discharge limits. The model also allowed us to identify the kinetic and stoichiometric parameters that matter most when modeling biological wastewater treatment plants. Another important finding was the effect of mixed-liquor suspended solids and recycle ratios on the effluent concentrations of parameters such as total nitrogen, ammonia, nitrate, and nitrite. The ANSYS model revealed, for example, that dead-zone formation increases along the length of the oxidation ditches compared with the regions near the aerators. These profiles were also very useful for studying mixing patterns, the effect of aerator speed, and the use of baffles, which in turn helps optimize plant performance. Keywords: computational fluid dynamics, flow-sheet simulation, kinetic modeling, process dynamics
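As a rough illustration of the kind of activated-sludge kinetics such flow-sheet models solve internally (this is not BioWin's actual model, and every parameter value below is an assumption), a batch Monod growth/consumption balance can be integrated as follows:

```python
# Hypothetical sketch: batch Monod growth of biomass X on substrate S,
# forward-Euler integration. Not BioWin's model; parameters are illustrative.

mu_max = 4.0    # 1/d, maximum specific growth rate (assumed)
Ks = 10.0       # mg/L, half-saturation constant (assumed)
Y = 0.5         # g biomass per g substrate, yield (assumed)

def step(S, X, dt):
    mu = mu_max * S / (Ks + S)      # Monod specific growth rate
    dX = mu * X * dt                # biomass growth
    dS = -(mu / Y) * X * dt         # substrate consumption
    return max(S + dS, 0.0), X + dX

S, X = 200.0, 50.0                  # mg/L initial substrate and biomass
dt, t_end = 0.001, 1.0              # days
for _ in range(int(t_end / dt)):
    S, X = step(S, X, dt)

print(round(S, 1), round(X, 1))     # substrate nearly depleted after one day
```

Full plant simulators stack dozens of such rate expressions (nitrification, denitrification, decay) across interconnected units, which is why parameter selection by experimental design, as described above, becomes necessary.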
Procedia PDF Downloads 211
78 Inhibitory Effects of Crocin from Crocus sativus L. on Cell Proliferation of a Medulloblastoma Human Cell Line
Authors: Kyriaki Hatziagapiou, Eleni Kakouri, Konstantinos Bethanis, Alexandra Nikola, Eleni Koniari, Charalabos Kanakis, Elias Christoforides, George Lambrou, Petros Tarantilis
Abstract:
Medulloblastoma is a highly invasive tumour, as it tends to disseminate throughout the central nervous system early in its course. Despite the high 5-year survival rate, a significant number of patients demonstrate serious long- or short-term sequelae (e.g., myelosuppression, endocrine dysfunction, cardiotoxicity, neurological deficits and cognitive impairment) and higher mortality rates, unrelated to the initial malignancy itself but rather to the aggressive treatment. A strong rationale exists for the use of Crocus sativus L. (saffron) and its bioactive constituents (crocin, crocetin, safranal) as pharmaceutical agents, as they exert significant health-promoting properties. Unlike other carotenoids, crocins are highly water-soluble compounds with relatively low toxicity, as they are not stored in adipose and liver tissues. Crocins have attracted wide attention as promising anti-cancer agents due to their antioxidant, anti-inflammatory and immunomodulatory effects, their interference with transduction pathways implicated in tumorigenesis, angiogenesis and metastasis (disruption of mitotic spindle assembly, inhibition of DNA topoisomerases, cell-cycle arrest, apoptosis or cell differentiation), and their sensitization of cancer cells to radiotherapy and chemotherapy. The current research aimed to study the potential cytotoxic effect of crocins on the TE671 medulloblastoma cell line, which may be useful in optimizing existing therapeutic strategies and developing new ones. Crocins were extracted from saffron stigmas in an ultrasonic bath, using petroleum ether, diethyl ether and methanol 70% v/v as solvents, and the final extract was lyophilized. Crocins were identified by high-performance liquid chromatography (HPLC), comparing the UV-vis spectra and retention times (tR) of the peaks with literature data. For the biological assays, crocin was diluted in nuclease- and protease-free water. 
TE671 cells were incubated with a range of crocin concentrations (16, 8, 4, 2, 1, 0.5 and 0.25 mg/ml) for 24, 48, 72 and 96 hours. Cell viability after incubation with crocins was assessed with the Alamar Blue viability assay. The active ingredient of Alamar Blue, resazurin, is a blue, nontoxic, cell-permeable and virtually nonfluorescent compound. Upon entering cells, resazurin is reduced to resorufin, a pink and fluorescent molecule. Viable cells continuously convert resazurin to resorufin, generating a quantitative measure of viability. Resorufin was quantified by measuring the absorbance of the solution at 600 nm with a spectrophotometer. HPLC analysis indicated that the most abundant crocins in our extract were trans-crocin-4 and trans-crocin-3. Crocins exerted significant cytotoxicity in a dose- and time-dependent manner (p < 0.005 for cells exposed to any concentration at 48, 72 and 96 hours versus unexposed cells); as concentration and exposure time increased, the reduction of resazurin to resorufin decreased, indicating reduced cell viability. IC50 values were calculated as approximately 3.738, 1.725, 0.878 and 0.7566 mg/ml at 24, 48, 72 and 96 hours, respectively. The results of our study could form the basis of research into the use of natural carotenoids as anticancer agents and the shift to targeted therapy with higher efficacy and limited toxicity. Acknowledgements: The research was funded by the Fellowships of Excellence for Postgraduate Studies IKY-Siemens Programme. Keywords: crocetin, crocin, medulloblastoma, saffron
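IC50 values of the kind reported above can be estimated from dose-response viability data by, for example, log-linear interpolation between the two concentrations bracketing 50% viability; the viability fractions below are hypothetical, not the study's measurements:

```python
# Illustrative sketch: IC50 by log-linear interpolation. The dose-response
# values are assumed, not the study's raw Alamar Blue readings.
import math

def ic50(concs, viability):
    """concs in mg/ml (ascending), viability as fractions of control."""
    pairs = list(zip(concs, viability))
    for (c1, v1), (c2, v2) in zip(pairs, pairs[1:]):
        if v1 >= 0.5 >= v2:  # bracketing pair found
            f = (v1 - 0.5) / (v1 - v2)          # fraction of the interval
            # interpolate on log10(concentration)
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    return None

concs = [0.25, 0.5, 1, 2, 4, 8, 16]
viab = [0.95, 0.90, 0.80, 0.60, 0.45, 0.20, 0.05]  # hypothetical 24 h curve
print(round(ic50(concs, viab), 2))  # roughly 3 mg/ml for this assumed curve
```

In practice a four-parameter Hill fit over all replicates is preferred; the interpolation above only shows where the reported per-time-point IC50 numbers come from conceptually.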
Procedia PDF Downloads 216
77 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles
Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo
Abstract:
Non-Cooperative Target Identification has become a key research domain in the defense industry, since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. To face this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, the test set, with the profiles included in a pre-loaded database, the training set. Classification is improved by using Singular Value Decomposition, since it allows each aircraft to be modelled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, hence reducing unwanted information such as noise. Singular Value Decomposition permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics based on Singular Value Decomposition, F1 and F2, are used in the identification process. In F2 the angle is weighted, since the top vectors set the importance of each contribution to the formation of a target signal, whereas F1 simply uses the unweighted angle. 
In order to build a wide database of radar signatures and evaluate performance, range profiles are obtained through numerical simulation of seven civil aircraft on defined trajectories taken from an actual measurement. Given the nature of the datasets, the main drawback of using simulated rather than measured profiles is that the former implies an ideal identification scenario: measured profiles suffer from noise, clutter and other unwanted information, while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so, to assess the feasibility of the approach, noise is added before the creation of the test set. The identification results of the unweighted and weighted metrics are analysed to establish which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments with profiles from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, recognition performance improves when weighting is applied. Future experiments with larger sets are planned, with the aim of finally using actual profiles as test sets in a real hostile situation. Keywords: HRRP, NCTI, simulated/synthetic database, SVD
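The subspace-angle classification described above can be sketched as follows, with synthetic random vectors standing in for real range profiles (the profile data, subspace dimension and target names are all assumptions, and only the unweighted-angle metric is shown):

```python
# Illustrative sketch (not the paper's code): classifying a test range profile
# by its angle to per-target signal subspaces obtained via SVD.
import numpy as np

rng = np.random.default_rng(0)

def signal_subspace(profiles, k):
    """Top-k left singular vectors of a (range_bins x n_profiles) matrix."""
    U, s, Vt = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :k]

def angle_to_subspace(x, U):
    """Angle between profile x and the subspace spanned by U's columns."""
    proj = U @ (U.T @ x)
    cosang = np.linalg.norm(proj) / np.linalg.norm(x)
    return np.arccos(np.clip(cosang, 0.0, 1.0))

# Two hypothetical targets, each a training set of noisy copies of a template.
t1, t2 = rng.normal(size=64), rng.normal(size=64)
train1 = np.stack([t1 + 0.1 * rng.normal(size=64) for _ in range(20)], axis=1)
train2 = np.stack([t2 + 0.1 * rng.normal(size=64) for _ in range(20)], axis=1)
subspaces = {"target1": signal_subspace(train1, 3),
             "target2": signal_subspace(train2, 3)}

test = t1 + 0.2 * rng.normal(size=64)   # a noisy profile of target 1
best = min(subspaces, key=lambda k: angle_to_subspace(test, subspaces[k]))
print(best)  # target1
```

The weighted metric F2 would additionally scale the per-vector contributions by their singular values before forming the angle, which is what improves robustness once noise is added.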
Procedia PDF Downloads 354
76 A Case Study on the Estimation of Design Discharge for Flood Management in Lower Damodar Region, India
Authors: Susmita Ghosh
Abstract:
The catchment of the Damodar River, India, experiences seasonal rains due to the south-west monsoon every year, and floods occur depending on the intensity of the storms. During the monsoon season, rainfall in the area is mainly due to active monsoon conditions. The upstream reach of the Damodar river system has five dams that store water for various purposes, viz. irrigation, hydro-power generation, municipal supply and, not least, flood moderation. The downstream reach of the Damodar River, known as the Lower Damodar region, however, suffers severe and frequent flooding due to heavy monsoon rainfall as well as releases from the upstream reservoirs. An effective flood management study is therefore required to understand in depth the nature and extent of the flooding, water logging and erosion problems, the affected area, and the damages in the Lower Damodar region, by means of a mathematical model study. The design flood, or design discharge, is needed to set up the model and obtain several scenarios from the simulation runs; the ultimate aim is to arrive at a sustainable flood management scheme from the several alternatives. There are various methods for estimating the flood discharges to be carried through the rivers and their tributaries for quick drainage of areas inundated by drainage congestion and excess rainfall. In the present study, flood frequency analysis is performed to decide the design flood discharge of the study area. This approach, on the other hand, is limited by the availability of a long peak-flood record for correctly determining the type of probability density function. If sufficient past records are available, the maximum flood on a river with a given frequency can safely be determined. Floods of different frequencies for the Damodar have been calculated with five candidate distributions: generalized extreme value, extreme value type I, Pearson type III, log-Pearson and normal. 
Annual peak discharge series are available at Durgapur barrage for the period 1979 to 2013 (35 years), and these series are subjected to frequency analysis. The primary objective of flood frequency analysis is to relate the magnitude of extreme events to their frequency of occurrence through the use of probability distributions. Design floods for return periods of 10, 15 and 25 years at Durgapur barrage are estimated by the flood frequency method. It is then necessary to develop flood hydrographs for these floods to enable the mathematical model studies to find the depth and extent of inundation, etc. The null hypothesis that the distributions fit the data at 95% confidence is checked with a goodness-of-fit test, i.e., the chi-square test. The test reveals that all five distributions show a good fit to the sample population and are therefore accepted. However, there is considerable variation in the estimated frequency floods, so it is considered prudent to average the results of the five distributions for the required frequencies. The inundated area from past data is well matched using this flood. Keywords: design discharge, flood frequency, goodness of fit, sustainable flood management
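As an illustration of one of the five candidate distributions, a Gumbel (extreme value type I) design-flood estimate by the method of moments can be computed as follows; the annual-peak series here is hypothetical, not the Durgapur record:

```python
# Illustrative sketch: Gumbel (EV-I) flood frequency by the method of moments.
# The peak series is assumed data, not the 1979-2013 Durgapur observations.
import math

peaks = [4200, 3900, 5100, 6100, 4800, 5600, 3700, 7200,
         4500, 5300, 6800, 4100, 5900, 4600, 5000]  # m3/s, hypothetical

n = len(peaks)
mean = sum(peaks) / n
std = math.sqrt(sum((q - mean) ** 2 for q in peaks) / (n - 1))

# Gumbel parameters via method of moments
alpha = math.sqrt(6) * std / math.pi          # scale
u = mean - 0.5772 * alpha                     # location (Euler-Mascheroni const.)

def design_flood(T):
    """Discharge of return period T years: Q_T = u - alpha*ln(-ln(1 - 1/T))."""
    return u - alpha * math.log(-math.log(1 - 1 / T))

for T in (10, 15, 25):
    print(T, round(design_flood(T)))
```

Fitting the remaining candidates (GEV, Pearson III, log-Pearson, normal) in the same way and averaging the resulting quantiles mirrors the averaging strategy adopted in the study.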
Procedia PDF Downloads 203
75 Oblique Radiative Solar Nano-Polymer Gel Coating Heat Transfer and Slip Flow: Manufacturing Simulation
Authors: Anwar Beg, Sireetorn Kuharat, Rashid Mehmood, Rabil Tabassum, Meisam Babaie
Abstract:
Nano-polymeric solar paints and sol-gels have emerged as a major new development in solar cell/collector coatings, offering significant improvements in durability, anti-corrosion performance and thermal efficiency. They also exhibit substantial viscosity variation with temperature, which can be exploited in solar collector designs. Modern manufacturing processes for such nano-rheological materials frequently employ stagnation flow dynamics at high temperature, which invokes radiative heat transfer. Motivated by elaborating in further detail the nanoscale heat, mass and momentum characteristics of such sol-gels, the present article presents a mathematical and computational study of the steady, two-dimensional, non-aligned thermo-fluid boundary layer transport of copper metal-doped, water-based nano-polymeric sol-gels under radiative heat flux. To simulate real nano-polymer boundary interface dynamics, thermal slip is analysed at the wall, and a temperature-dependent viscosity is also considered. The Tiwari-Das nanofluid model is deployed, which features a volume fraction for the nanoparticle concentration, together with a Maxwell-Garnett model for the nanofluid thermal conductivity. The conservation equations for mass, normal and tangential momentum, and energy (heat) are normalized via appropriate transformations to generate a non-linear, coupled, multi-degree ordinary differential boundary value problem. Numerical solutions are obtained via the stable, efficient Runge-Kutta-Fehlberg scheme with shooting quadrature in MATLAB. Validation of the solutions is achieved with a Variational Iterative Method (VIM) utilizing Lagrangian multipliers. The impact of the key emerging dimensionless parameters, i.e., 
the obliqueness parameter, the radiation-conduction Rosseland number (Rd), the thermal slip parameter (α), the viscosity parameter (m) and the nanoparticle volume fraction (ϕ), on the non-dimensional normal and tangential velocity components, temperature, wall shear stress, local heat flux and streamline distributions is visualized graphically. Shear stress and temperature are boosted with increasing radiative effect, whereas local heat flux is reduced. Increasing the wall thermal slip parameter depletes temperatures. With a greater volume fraction of copper nanoparticles, the temperature and thermal boundary layer thickness are elevated. Streamlines are found to be skewed markedly towards the left with a positive obliqueness parameter. Keywords: non-orthogonal stagnation-point heat transfer, solar nano-polymer coating, MATLAB numerical quadrature, Variational Iterative Method (VIM)
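The shooting approach described above can be illustrated on the much simpler classical Blasius boundary-layer equation f''' + ½ f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1 (the authors' coupled nanofluid system is not reproduced here): guess the missing initial condition f''(0), integrate, and bisect until the far-field condition is met.

```python
# Illustrative sketch of shooting quadrature on the Blasius equation,
# using classical RK4 instead of Runge-Kutta-Fehlberg for brevity.

def fprime_at_infinity(s, eta_max=10.0, h=0.01):
    """Integrate f''' = -0.5*f*f'' from eta=0 with f''(0)=s; return f'(eta_max)."""
    def rhs(y):
        return (y[1], y[2], -0.5 * y[0] * y[2])
    y = (0.0, 0.0, s)                  # (f, f', f'') at the wall
    for _ in range(int(eta_max / h)):
        k1 = rhs(y)
        k2 = rhs(tuple(y[i] + 0.5 * h * k1[i] for i in range(3)))
        k3 = rhs(tuple(y[i] + 0.5 * h * k2[i] for i in range(3)))
        k4 = rhs(tuple(y[i] + h * k3[i] for i in range(3)))
        y = tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return y[1]                        # f'(eta_max), should approach 1

lo, hi = 0.1, 1.0                      # bracket for the unknown f''(0)
for _ in range(60):                    # bisection on the shooting parameter
    mid = 0.5 * (lo + hi)
    if fprime_at_infinity(mid) < 1.0:
        lo = mid
    else:
        hi = mid

print(round(0.5 * (lo + hi), 4))       # classical value ~0.332
```

The nanofluid problem is solved the same way, except that several coupled equations are shot simultaneously and an adaptive Fehlberg step replaces the fixed RK4 step.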
Procedia PDF Downloads 136
74 Virtual Reality Applications for Building Indoor Engineering: Circulation Way-Finding
Authors: Atefeh Omidkhah Kharashtomi, Rasoul Hedayat Nejad, Saeed Bakhtiyari
Abstract:
Circulation paths and the indoor connection network of a building play an important role both in the daily operation of the building and during evacuation in emergency situations. The legibility of the paths for navigation inside the building is deeply connected with the human perceptive and cognitive system and with the way the surrounding environment is perceived. Human perception of space is based on the sensory systems, operates in a three-dimensional environment and is non-linear, so architectural design must avoid reducing its representation to a two-dimensional, linear issue. Today, advances in virtual reality (VR) technology have led to various applications, and architecture and building science can benefit greatly from these capabilities. Especially where the design solution requires a detailed and complete understanding of human perception of the environment and of the behavioral response, special attention to VR technologies should be a priority. Way-finding in the indoor circulation network is a good example of such an application. Way-finding succeeds when human perception of the route and the behavioral reaction have been considered in advance and reflected in the architectural design. This paper discusses VR technology applications for improving way-finding in the indoor engineering of buildings. In a systematic review of a database comprising numerous studies, four categories of VR applications for circulation way-finding were first identified: 1) data collection of key parameters, 2) comparison of the effect of each parameter in the virtual environment versus the real world (in order to improve the design), 3) comparison of experimental results across different VR devices/methods, or against the results of building simulation, and 4) training and planning. 
Since the costs of technical equipment and the knowledge required to use VR tools limit their use across all design projects, priority buildings for the use of VR during design are introduced based on case-study analysis. The results indicate that VR technology provides opportunities for designers to solve complex building design challenges effectively and efficiently. The environmental parameters and the architecture of the circulation routes (indicators such as route configuration, topology, signs, structural and non-structural components, etc.) and the characteristics of each (metrics such as dimensions, proportions, color, transparency, texture, etc.) are then classified for the VR way-finding experiments. Next, in view of human behavior and reactions in movement-related issues, the necessity of scenario-based experiment design for using VR technology to improve the design and receive feedback from the test participants is described. The parameters related to scenario design are presented in a flowchart covering test design, data determination and interpretation, recording of results, analysis, errors, validation and reporting. The design of the experiment environment is also discussed with respect to equipment selection according to the scenario and the parameters under study, as well as creating the sense of illusion, in terms of place illusion, plausibility and the illusion of body ownership. Keywords: virtual reality (VR), way-finding, indoor, circulation, design
Procedia PDF Downloads 75
73 Research on the Effect of Coal Ash Slag Structure Evolution on Its Flow Behavior During Co-gasification of Coal and Indirect Coal Liquefaction Residue
Authors: Linmin Zhang
Abstract:
Entrained-flow gasification technology is considered the most promising gasification technology because of its clean and efficient utilization characteristics. The stable fluidity of slag at high temperatures is key to the long-period operation of the gasifier. The diversity of, and differences between, coal ash-slag systems make it difficult to meet the requirements for stable slagging in entrained-flow gasifiers. Therefore, coal blending or the addition of fluxes has long been used in industry to improve the flow behavior of coal ash. As a by-product of the indirect coal liquefaction process, indirect coal liquefaction residue (ICLR) is a kind of industrial solid waste that is usually disposed of by stacking or landfilling. However, this disposal method not only occupies land resources but also seriously pollutes soil and water bodies through leachate containing toxic and harmful metals. As a carbon-containing matrix, ICLR is not only a waste but also an energy substance. Using existing industrial gasifiers to co-gasify ICLR can not only turn industrial solid waste into fuel but also save coal resources. Moreover, ICLR usually has an ash chemical composition distinct from that of coal, which affects the slagging performance of the gasifier. Therefore, exploring the effect of the ash in ICLR on coal ash flow behavior can not only improve the slagging performance and gasification efficiency of entrained-flow gasifiers, by exploiting the distinctive ash chemistry of ICLR, but also provide theoretical support for the large-scale consumption of industrial solid waste. Combining molecular dynamics simulation with Raman spectroscopy experiments, the effect of ICLR addition on slag structure and fluidity is explained, and the relationship between the evolution of the short/medium-range microstructure of the slag and its macroscopic flow behavior is discussed. 
The research found that the high silicon and aluminum content in coal ash led to the formation of complex [SiO₄]⁴⁻ and [AlO₄]⁵⁻ tetrahedral structures at high temperature; these tetrahedra were connected through oxygen atoms into multi-membered ring structures with a high degree of polymerization. Because of these multi-membered ring structures, the internal friction in the slag increased, and the viscosity was correspondingly higher at the macro level. As a network-modifying ion, Fe²⁺ could replace Si⁴⁺ and Al³⁺ in the multi-membered ring structure and combine with O²⁻, destroying the bridging oxygen (BO) structure and transforming the more complex tricluster oxygen (TO) and bridging oxygen (BO) into simple non-bridging oxygen (NBO) structures. As a result, a large number of multi-membered rings with a high degree of polymerization were depolymerized into low-membered rings with a low degree of polymerization. This evolution of oxygen types and ring structures reduced the structural complexity and degree of polymerization of the coal ash slag, resulting in a decrease in its viscosity. Keywords: ash slag, coal gasification, fluidity, industrial solid waste, slag structure
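The polymerization trend described above is often summarized by the NBO/T ratio (non-bridging oxygens per tetrahedrally coordinated cation); a simplified sketch, with hypothetical oxide compositions rather than the study's analyses, is:

```python
# Illustrative sketch: NBO/T as a proxy for slag polymerization degree.
# Simplified charge bookkeeping (Al assumed tetrahedral and charge-balanced);
# the mol% compositions below are assumed, not the study's ash analyses.

def nbo_t(mol):
    """NBO/T = (2*O - 4*T)/T for an oxide melt, with T = Si + Al formers."""
    T = mol.get("SiO2", 0) + 2 * mol.get("Al2O3", 0)        # tetrahedral cations
    O = (2 * mol.get("SiO2", 0) + 3 * mol.get("Al2O3", 0)   # total oxygen
         + mol.get("CaO", 0) + mol.get("FeO", 0) + mol.get("MgO", 0))
    return (2 * O - 4 * T) / T

base = {"SiO2": 55, "Al2O3": 20, "CaO": 15, "FeO": 5, "MgO": 5}
with_residue = {"SiO2": 48, "Al2O3": 17, "CaO": 15, "FeO": 15, "MgO": 5}  # Fe-richer

print(round(nbo_t(base), 3), round(nbo_t(with_residue), 3))
# the Fe-richer slag has the higher NBO/T, i.e. a less polymerized network
```

This mirrors the mechanism above: adding Fe²⁺ from the residue converts BO to NBO, raises NBO/T, and thereby lowers the slag viscosity.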
Procedia PDF Downloads 31
72 Earthquake Preparedness of School Community and E-PreS Project
Authors: A. Kourou, A. Ioakeimidou, S. Hadjiefthymiades, V. Abramea
Abstract:
During the last decades, the task of engaging governments, communities and citizens in reducing the risk and vulnerability of populations has made variable progress. Experience has demonstrated that a lack of awareness, education and preparedness may result in significant material and other losses at the onset of a disaster. Schools play a vital role in the community and are important carriers of the values and culture of society. A proper school education not only teaches children but is also a key factor in promoting a safety culture in the wider community. In Greece, a School Earthquake Safety Initiative has been undertaken by the Earthquake Planning and Protection Organization (EPPO) with specific actions (seminars, lectures, guidelines, educational material, campaigns, national or EU projects, drills, etc.). The objective of this initiative is to develop disaster-resilient school communities through awareness, self-help, cooperation and education. School preparedness requires the participation of principals, teachers, students, parents, and the competent authorities. Preparation and earthquake readiness involve: a) learning what should be done before, during, and after an earthquake; b) doing, or preparing to do, these things now, before the next earthquake; and c) developing teachers' and students' skills to cope efficiently in case of an earthquake. Within this framework, this paper presents the results of a survey aimed at identifying the level of education and preparedness of the school community in Greece. More specifically, the survey questionnaire investigates issues regarding earthquake protection actions, appropriate attitudes and behaviors during an earthquake, and the existence of contingency plans at elementary and secondary schools. The questionnaires were administered to principals and teachers from different regions of the country who attend the EPPO national training project 'Earthquake Safety at Schools'. 
A closed-form questionnaire was developed for the survey, containing questions on the following: a) knowledge of self-protective actions, b) existence of emergency planning at home, and c) existence of emergency planning at school (hazard mitigation actions, evacuation plan, and performance of drills). The survey revealed that a high percentage of teachers have taken the appropriate preparedness measures concerning non-structural hazards at schools, the school emergency plan, and yearly simulation drills. In order to improve action-planning for ongoing school disaster risk reduction, the implementation of earthquake drills, the involvement of students with disabilities and the evaluation of school emergency plans, EPPO participates in the E-PreS project. The main objective of this project is to create smart tools that define, simulate and evaluate all hazard emergency steps customized to the specific district and school. The project develops a holistic methodology using real-time evaluation involving different categories of actors, districts, steps and metrics. The project is supported by the EU Civil Protection Financial Instrument, with a duration of two years. The coordinator is the Kapodistrian University of Athens, and the partners come from four countries: Greece, Italy, Romania and Bulgaria. Keywords: drills, earthquake, emergency plans, E-PreS project
Procedia PDF Downloads 235
71 The Temperature Degradation Process of Siloxane Polymeric Coatings
Authors: Andrzej Szewczak
Abstract:
The study of the effect of high temperatures on polymer coatings is an important field of research into their properties. Polymers, as materials with numerous advantages (chemical resistance, ease of processing and recycling, corrosion resistance, low density and weight), are currently the most widely used modern building materials, among others in resin concrete, plastic parts, and hydrophobic coatings. Unfortunately, polymers also have disadvantages, one of which limits their usage: low resistance to high temperatures, and brittleness. This applies in particular to thin and flexible polymeric coatings applied to other materials, such as steel and concrete, which degrade under varying thermal conditions. Research aimed at improving this state includes modification of the polymer composition, structure, conditioning conditions, and the polymerization reaction. At present, ways are sought to reflect the actual environmental conditions in which the coating will operate after it has been applied to another material. These studies are difficult because of the need to adopt a proper model of the polymer's operation and to determine the phenomena occurring at times of temperature fluctuation. For this reason, alternative methods are being developed that allow rapid modeling and simulation of the actual operating conditions of polymeric coating materials. Temperature influence in the environment is, by nature, a matter of duration, so studies typically involve measuring the variation of one or more physical and mechanical properties of a coating over time. Based on these results, it is possible to determine the effects of temperature loading and to develop methods for improving the coatings' properties. This paper describes stability studies of silicone coatings deposited on the surface of a ceramic brick. 
The brick surface was hydrophobized by two types of inorganic polymer: a nano-polymer preparation based on dialkyl siloxanes (series 1-5) and an aqueous silicone solution (series 6-10). In order to enhance the stability of the film formed on the brick surface and to make it resistant to variable temperature and humidity loading, nano-silica was added to the polymer. The right combination of the liquid polymer phase and the solid nano-silica phase was obtained by disintegrating the mixture by sonication. The changes in viscosity and surface tension of the polymers were determined; these are the basic rheological parameters affecting the state and durability of the polymer coating. The coatings created on the brick surfaces were then subjected to a temperature loading of 100 °C and to moisture by total immersion in water, in order to determine any water-absorption changes caused by damage and degradation of the polymer film. The effect of moisture and temperature was determined by measuring (at a specified number of cycles) the changes in surface hardness (using the Vickers method) and the absorption of individual samples. On the basis of the obtained results, the degradation process of the polymer coatings, related to changes in their durability over time, was determined. Keywords: silicones, siloxanes, surface hardness, temperature, water absorption
Procedia PDF Downloads 243
70 Developing Telehealth-Focused Advanced Practice Nurse Educational Partnerships
Authors: Shelley Y. Hawkins
Abstract:
Introduction/Background: As technology has grown exponentially in healthcare, nurse educators must prepare Advanced Practice Registered Nurse (APRN) graduates with the knowledge and skills in information systems/technology to support and improve patient care and health care systems. APRNs are expected to lead in caring for populations who lack accessibility and availability of care through the use of technology, specifically telehealth. The capacity to use technology effectively and efficiently in patient care delivery is clearly delineated in the American Association of Colleges of Nursing (AACN) Doctor of Nursing Practice (DNP) and Master of Science in Nursing (MSN) Essentials. However, APRNs have minimal, or no, exposure to formalized telehealth education and lack the technical skills needed to incorporate telehealth into their patient care. APRNs must master technologies including telehealth/telemedicine, electronic health records, health information technology, and clinical decision support systems to advance health. Furthermore, APRNs must be prepared to lead the coordination and collaboration with other healthcare providers in their use and application. Aim/Goal/Purpose: The purpose of this presentation is to establish and operationalize telehealth-focused educational partnerships between one university school of nursing and two health care systems in order to enhance the preparation of APRN NP students for practice, teaching, and/or scholarly endeavors. Methods: The proposed project was initially presented by the project director to selected multidisciplinary stakeholders, including leadership, home telehealth personnel, primary care providers, and decision support systems staff within two major health care systems, to garner their support for acceptance and implementation.
Concurrently, backing was obtained from key university-affiliated colleagues, including the Director of the Simulation and Innovative Learning Lab and the Coordinator of the Health Care Informatics Program. Technology experts skilled in the design and production of web applications and electronic modules were secured from two locally based technology companies. Results: Two telehealth-focused APRN program academic/practice partnerships have been established. Students have opportunities to engage in clinically based telehealth experiences focused on: (1) providing patient care while incorporating various technologies, with a specific emphasis on telehealth; (2) conducting research and/or evidence-based practice projects in order to further develop the scientific foundation for incorporating telehealth into patient care; and (3) participating in the production of patient-level educational materials related to specific topical areas. Conclusions: Evidence-based APRN student telehealth clinical experiences will assist in preparing graduates who can effectively incorporate telehealth into their clinical practice. Greater access for diverse populations will be available as a result of the telehealth service model, along with better care and better outcomes at lower costs. Furthermore, APRNs will provide the necessary leadership and coordination through interprofessional practice, transforming health care through new innovative care models that use information systems and technology.
Keywords: academic/practice partnerships, advanced practice nursing, nursing education, telehealth
Procedia PDF Downloads 242
69 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak
Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi
Abstract:
This paper investigates the software capability and computer-aided engineering (CAE) method for modelling the transient heat transfer processes occurring in the vehicle underhood region during the thermal soak phase. Heat retained from the soak period benefits the subsequent cold start through reduced friction losses for the second 14 °C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, and therefore provides benefits for both CO₂ emission reduction and fuel economy. When a vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is a key factor in the thermal simulation of the engine bay, needed to obtain accurate fluid and metal temperature cool-down trajectories and to predict the temperatures at the end of the soak period. In this study, a method was developed for a light-duty passenger vehicle using a coupled aerodynamic-heat transfer transient modelling approach for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved as inherently transient by the Lattice-Boltzmann Method (LBM) using the PowerFlow software. This was further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behavior on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with vehicle test data for the key fluid (coolant, oil) and metal temperatures.
The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data, and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and embedded within a holistic study of vehicle energy modelling and thermal management. It was also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and that the flow development during this stage is vital for accurately predicting the heat transfer coefficients used in heat retention modelling. The developed method has demonstrated software integration for simulating buoyancy-driven heat transfer in a vehicle underhood region during thermal soak with satisfactory accuracy and efficient computing time. The CAE method developed will allow the design of engine encapsulations for improving fuel consumption and reducing CO₂ emissions to be integrated in a timely and robust manner, aiding the development of low-carbon transport technologies.
Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak
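The reduced-order cool-down modelling mentioned above can be illustrated with a lumped-capacitance (Newton cooling) sketch; the time constant and temperatures below are illustrative assumptions, not values from the study:

```python
import math

def lumped_cooldown(t0_c, t_amb_c, tau_h, hours):
    # Lumped-capacitance trajectory: T(t) = T_amb + (T0 - T_amb) * exp(-t / tau)
    return [t_amb_c + (t0_c - t_amb_c) * math.exp(-t / tau_h) for t in hours]

# Coolant cooling from 90 degC toward a 14 degC ambient over a 9-hour soak
trajectory = lumped_cooldown(90.0, 14.0, tau_h=3.0, hours=range(10))
```

A time constant fitted per component (coolant, oil, metal) from the full coupled simulation would reproduce the cool-down trajectories at negligible computational cost, which is the point of the reduced-order follow-up work.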
Procedia PDF Downloads 154
68 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is suspended dust that primarily has a negative effect on the respiratory system: it is responsible for attacks of coughing and wheezing, asthma, and acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentrations. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was based mainly on the mean square error criterion; however, during model validation, a number of other quantitative evaluation methods were also taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data.
Due to the specificity of CNN-type networks, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for each hour of the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several models that gave the best results were selected and then compared with models based on linear regression. The numerical tests carried out, using real 'big' data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than the currently used methods based on linear regression. Moreover, the use of neural networks increased the correlation coefficient (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
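The conv → pool → dense pipeline described above can be sketched in pure Python (untrained, with illustrative filter and weight values; the actual system learns its parameters from Airly sensor tensors with many layers and channels):

```python
import random

def conv1d(x, kernel):
    # Valid-mode 1D convolution (cross-correlation) with a single filter
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k)) for i in range(len(x) - k + 1)]

def max_pool(x, size=2):
    # Non-overlapping max pooling
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

def relu(x):
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    # Fully connected layer mapping pooled features to 24 hourly outputs
    return [sum(xi * w for xi, w in zip(x, row)) + b for row, b in zip(weights, biases)]

random.seed(0)
x = [random.random() for _ in range(48)]        # 48 h of one flattened input channel
kernel = [0.25, 0.5, 0.25]                      # one illustrative smoothing filter
features = relu(max_pool(conv1d(x, kernel)))
weights = [[random.uniform(-0.1, 0.1) for _ in features] for _ in range(24)]
forecast = dense(features, weights, [0.0] * 24)  # 24 hourly PM10 predictions
```

The essential point is the output shape: whatever the depth of the hidden layers, the final dense layer emits a 24-element vector, one predicted concentration per hour of the following day.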
Procedia PDF Downloads 150
67 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System
Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar
Abstract:
Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing from the leading edge and suction at the trailing edge. The jet at the leading edge uses the Coanda effect to energise the boundary layer in order to delay flow separation and create high lift with low drag. The SWAP concept originated with the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out in the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of this system. The tests aim to understand the airflow characteristics around the aerofoil and to estimate the lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibit clear trends of increasing lift as injection momentum increases, with critical flow attachment points identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°.
The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows, over-predicting surface pressures and lift coefficients for fully attached flow cases. Work is continuing to find an all-encompassing modelling approach that predicts surface pressures well for all combinations of jet injection momentum and AOA.
Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel
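A common definition of the jet momentum coefficient for circulation-control aerofoils (assumed here; the paper's exact normalisation and the numerical values below are not taken from the study) can be sketched as:

```python
def jet_momentum_coefficient(m_dot_kg_s, v_jet_m_s, rho_kg_m3, u_inf_m_s, ref_area_m2):
    # Cmu = (m_dot * V_jet) / (0.5 * rho * U_inf^2 * S):
    # jet momentum flux normalised by the freestream dynamic pressure times
    # a reference area
    return (m_dot_kg_s * v_jet_m_s) / (0.5 * rho_kg_m3 * u_inf_m_s ** 2 * ref_area_m2)

# Illustrative wind-tunnel-scale values
c_mu = jet_momentum_coefficient(0.1, 30.0, 1.225, 10.0, 0.5)
```

Because the freestream dynamic pressure sits in the denominator, the same blowing rate yields a higher Cµ at a lower tunnel speed, which is why test points are reported as (Cµ, AOA) pairs rather than blowing rates alone.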
Procedia PDF Downloads 135
66 Numerical Investigations of Unstable Pressure Fluctuations Behavior in a Side Channel Pump
Authors: Desmond Appiah, Fan Zhang, Shouqi Yuan, Wei Xueyuan, Stephen N. Asomani
Abstract:
The side channel pump has distinctive hydraulic performance characteristics compared with other vane pumps because it generates high pressure heads in only one impeller revolution. Hence, its utilization is soaring in petrochemical, food processing, automotive, and aerospace fuel pumping applications where high heads are required at low flows. The side channel pump is characterized by unstable flow: after fluid flows into the impeller passage, it moves into the side channel, returns to the impeller, and then proceeds to the next circulation, so the flow leaves the side channel pump following a helical path. However, the pressure fluctuation exhibited in the flow contributes greatly to the unwanted noise and vibration associated with it. In this paper, a side channel pump prototype was examined thoroughly through numerical calculations based on the SST k-ω turbulence model to ascertain the pressure fluctuation behavior. The pressure fluctuation intensity of the 3D unstable flow dynamics was carefully investigated under three working conditions: 0.8QBEP, 1.0QBEP, and 1.2QBEP. The results showed that the pressure fluctuation distribution on the pressure side of the blade is greater than on the suction side at the impeller and side channel interface (z=0) for all three operating conditions. The part-load condition 0.8QBEP recorded the highest pressure fluctuation distribution because of the high circulation velocity, which causes an intense exchanged flow between the impeller and the side channel. Time and frequency domain spectra of the pressure fluctuation patterns in the impeller and the side channel were also analyzed at the best efficiency point, QBEP, using the solution from the numerical calculations.
It was observed from the time-domain analysis that the pressure fluctuation in the impeller flow passage increased steadily until the flow reached the interrupter, which separates the low-pressure inflow from the high-pressure outflow. The pressure fluctuation amplitudes in the frequency domain spectrum at the different monitoring points depicted a gently decreasing trend in pressure amplitude, common to all operating conditions. The frequency domain also revealed that the main excitation frequencies occurred at 600 Hz, 1200 Hz, and 1800 Hz, continuing at integer multiples of the rotating shaft frequency. Also, the mass flow exchange plots indicated that the side channel pump is characterized by many vortex flows. Operating conditions 0.8QBEP and 1.0QBEP depicted fewer, similar vortex flows, while 1.2QBEP recorded many vortex flows around the inflow, middle, and outflow regions. The results of the numerical calculations were finally verified experimentally. The performance characteristic curves from the simulated results showed that the 0.8QBEP working condition recorded a head increase of 43.03% and an efficiency decrease of 6.73% compared to 1.0QBEP. It can be concluded that for industrial applications where high heads are mostly required, the side channel pump can be designed to operate at part-load conditions. This paper can serve as a source of information for optimizing reliable performance and widening the applications of side channel pumps.
Keywords: exchanged flow, pressure fluctuation, numerical simulation, side channel pump
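The harmonic structure reported in the frequency-domain analysis can be illustrated with a small spectral sketch (a synthetic signal and a naive DFT, purely for illustration; the study's monitoring-point data are not reproduced here):

```python
import cmath
import math

def dominant_frequencies(signal, sample_rate, top_n=3):
    # Naive DFT of a pressure trace; returns the top_n peak frequencies in Hz
    n = len(signal)
    mags = []
    for k in range(1, n // 2):
        s = sum(signal[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
        mags.append((abs(s), k * sample_rate / n))
    mags.sort(reverse=True)
    return sorted(f for _, f in mags[:top_n])

# Synthetic pressure trace with components at 600, 1200 and 1800 Hz,
# mimicking excitation at integer multiples of the shaft frequency
fs, n = 9000, 900
t = [i / fs for i in range(n)]
p = [math.sin(2 * math.pi * 600 * ti) + 0.5 * math.sin(2 * math.pi * 1200 * ti)
     + 0.25 * math.sin(2 * math.pi * 1800 * ti) for ti in t]
peaks = dominant_frequencies(p, fs)
```

Applied to a monitoring-point pressure history, the same procedure would expose the 600/1200/1800 Hz comb the paper reports.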
Procedia PDF Downloads 137
65 Permeable Asphalt Pavement as a Measure of Urban Green Infrastructure in the Extreme Events Mitigation
Authors: Márcia Afonso, Cristina Fael, Marisa Dinis-Almeida
Abstract:
Population growth in cities has led to an increase in infrastructure construction, including buildings and roadways, which leads directly to soil waterproofing. In turn, precipitation patterns are developing toward higher and more frequent intensities. Together, these two aspects decrease rainwater infiltration into soils and increase the volume of surface runoff. The practice of green and sustainable urban solutions has encouraged research in these areas. The porous asphalt pavement, as a green infrastructure, is part of a set of practical solutions to address urban challenges related to land use and adaptation to climate change. In this field, permeable pavements with porous asphalt mixtures (PA) have several advantages in terms of reducing flood-generated runoff. The porous structure of these pavements, compared to a conventional asphalt pavement, allows rainwater infiltration into the subsoil and, consequently, an improvement in water quality. This green infrastructure solution can be applied in cities, particularly in streets or parking lots, to mitigate the effects of floods. Over the years, the pores of these pavements can become filled with sediment, reducing their capacity for rainwater infiltration. Thus, the double layer porous asphalt (DLPA) was developed to mitigate the clogging effect and facilitate water infiltration into the lower layers. This study intends to deepen the knowledge of the performance of DLPA when subjected to clogging. The experimental methodology consisted of four phases evaluating the DLPA infiltration capacity, each submitted to three precipitation events (100, 200 and 300 mm/h). The first evaluation phase determined the behavior after DLPA construction. In phases two and three, two 500 g/m² clogging cycles were performed, totalling a final load of 1000 g/m². Sand with a gradation rich in fine particles was used as the clogging material.
In the last phase, the DLPA was subjected to simple sweeping and vacuuming maintenance. A sprinkler-type precipitation simulator, capable of reproducing real precipitation, was developed for this purpose. The main conclusions show that the DLPA retains the capacity to drain water even after two clogging cycles. The infiltration flow results demonstrate the efficient performance of the DLPA in attenuating surface runoff, since no runoff was observed in any of the evaluation phases, even at intensities of 200 and 300 mm/h, which simulate intense precipitation events. The infiltration capacity under clogging conditions decreased by about 7% on average across the three intensities, relative to the initial, post-construction performance. However, it was restored when the pavement was subjected to simple maintenance, recovering the DLPA's hydraulic functionality. In summary, the study proved the efficacy of a DLPA that retains coarser sediments at the surface and limits the entry of fine sediments into the remaining layers, while rainwater infiltration and surface runoff reduction are still guaranteed; it is therefore a viable solution to put into practice in permeable pavements.
Keywords: clogging, double layer porous asphalt, infiltration capacity, rainfall intensity
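The infiltration-capacity bookkeeping behind such results can be sketched as follows (the rate definition and the sample volumes are illustrative assumptions, not measurements from the study):

```python
def infiltration_rate_mm_h(volume_l, area_m2, duration_min):
    # 1 L/m2 of water corresponds to a 1 mm depth, so rate = depth / time
    return (volume_l / area_m2) / (duration_min / 60.0)

def capacity_loss_pct(initial_rate, clogged_rate):
    # Relative loss of infiltration capacity versus the post-construction state
    return (initial_rate - clogged_rate) / initial_rate * 100.0

initial = infiltration_rate_mm_h(50.0, 1.0, 30.0)   # after construction
clogged = infiltration_rate_mm_h(46.5, 1.0, 30.0)   # after two clogging cycles
loss = capacity_loss_pct(initial, clogged)           # ~7%, matching the reported average
```

Repeating the same computation after maintenance would show the loss returning toward zero, which is how the recovery of hydraulic functionality is quantified.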
Procedia PDF Downloads 492
64 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy
Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini
Abstract:
Particle therapy (PT) is a very modern technique of non-invasive radiotherapy mainly devoted to the treatment of tumours that are untreatable with surgery or conventional radiotherapy because they are localised close to organs at risk (OAR). Nowadays, PT is available in about 55 centres in the world, and only about 20% of them are able to treat with carbon ion beams. However, the efficiency of ion-beam treatments is so impressive that many new centres are under construction. The interest in this powerful technology lies in the main characteristic of PT: the high irradiation precision and conformity of the dose released to the tumour, with simultaneous preservation of the adjacent healthy tissue. However, the beam's interactions with the patient produce a large component of secondary particles whose additional dose has to be taken into account when defining the treatment planning. Although the largest fraction of the dose is released to the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of neutrons within the patient's body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). While SMNs can develop up to decades after the treatments, their incidence directly impacts the quality of life of cancer survivors, particularly pediatric patients. Dedicated Treatment Planning Systems (TPS) are used to predict normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux is available, nor of its energy and angular distributions: an accurate characterization is needed in order to improve TPS and reduce safety margins.
The MONDO project (MOnitor for Neutron Dose in hadrOntherapy) is devoted to the construction of a secondary neutron tracker tailored to the characterization of this secondary neutron component. The detector, based on the tracking of recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in alternating x-y oriented layers. The final size of the device is 10 x 10 x 20 cm³ (250 µm square scintillating fibres, double cladding). The readout of the fibres is carried out with a dedicated SPAD array sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). The detector and the SBAM sensor are under development, and full construction is expected by the end of the year. MONDO will carry out data taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia), and at HIT (Heidelberg) with carbon ions, in order to characterize the neutron component, predict the additional dose delivered to patients with much greater precision, and drastically reduce the current safety margins. Preliminary measurements with charged particle beams and Monte Carlo FLUKA simulations will be presented.
Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering
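The double elastic scattering reconstruction rests on simple n-p kinematics; as a minimal non-relativistic sketch (valid for nearly equal neutron and proton masses; the actual detector reconstruction combines two scatterings and is considerably more involved):

```python
import math

def neutron_energy_mev(e_proton_mev, theta_rad):
    # Non-relativistic elastic n-p scattering: E_p = E_n * cos^2(theta),
    # where theta is the proton recoil angle relative to the incident
    # neutron direction, so E_n = E_p / cos^2(theta)
    return e_proton_mev / math.cos(theta_rad) ** 2

# A 50 MeV recoil proton observed at 60 degrees implies a 200 MeV neutron
e_n = neutron_energy_mev(50.0, math.radians(60.0))
```

Tracking the recoil proton in the fibre matrix supplies both the energy and the angle in this relation, which is what makes a fibre tracker usable as a neutron spectrometer.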
Procedia PDF Downloads 224
63 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology
Authors: Amarendar Reddy Addula
Abstract:
Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a powerful medium for digital business, according to a new report by Gartner. The last 10 years represent an advance period in AI's development, spurred by a confluence of factors including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. AI is spreading to a broader set of use cases and users and is gaining popularity because this improves AI's versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects.
Frequently, the two are conflated and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both establish models of social behaviour, but they are different in scope and nature. The juridical analysis is grounded in a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a primary step toward the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or differentiated nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the primary legal framework for the regulation of AI.
Keywords: artificial intelligence, ethics & human rights issues, laws, international laws
Procedia PDF Downloads 96
62 Toward the Decarbonisation of EU Transport Sector: Impacts and Challenges of the Diffusion of Electric Vehicles
Authors: Francesca Fermi, Paola Astegiano, Angelo Martino, Stephanie Heitel, Michael Krail
Abstract:
In order to achieve the targeted emission reductions for the decarbonisation of the European economy by 2050, fundamental contributions are required from both the energy and transport sectors. The objective of this paper is to analyse the impacts of a large-scale diffusion of e-vehicles, either battery-based or fuel-cell, together with the implementation of transport policies aimed at decreasing the use of motorised private modes, in order to achieve greenhouse gas emission reduction goals in the context of a future high share of renewable energy. The analysis of the impacts and challenges of future scenarios on the transport sector is performed with the ASTRA (ASsessment of TRAnsport Strategies) model. ASTRA is a strategic system-dynamics model at the European scale (EU28 countries, Switzerland and Norway), consisting of different sub-modules related to specific aspects: the transport system (e.g. passenger trips, tonnes moved), the vehicle fleet (composition and evolution of technologies), the demographic system, the economic system, and the environmental system (energy consumption, emissions). A key feature of ASTRA is that the modules are linked together: changes in one system are transmitted to other systems and can feed back to the original source of variation. Thanks to its multidimensional structure, ASTRA is capable of simulating a wide range of impacts stemming from the application of transport policy measures: the model addresses direct impacts as well as second-level and third-level impacts. The simulation of the different scenarios is performed within the REFLEX project, where the ASTRA model is employed in combination with several energy models in a comprehensive modelling system. From the transport sector perspective, some of the impacts are driven by the trend of electricity prices estimated by the energy modelling system.
Nevertheless, the major drivers toward a low-carbon transport sector are policies related to increased fuel efficiency of conventional drivetrain technologies, improved demand management (e.g. increased public transport and car sharing services/usage), and the diffusion of environmentally friendly vehicles (e.g. electric vehicles). The final modelling results of the REFLEX project will be available from October 2018. The analysis of the impacts and challenges of future scenarios is performed in terms of transport, environmental, and social indicators. The diffusion of e-vehicles produces a considerable reduction of future greenhouse gas emissions, although the decarbonisation target can be achieved only with the contribution of complementary transport policies on demand management and support for the deployment of low-emission alternative energy for non-road transport modes. The paper explores the implications through time of transport policy measures on mobility and environment, underlining to what extent they can contribute to the decarbonisation of the transport sector. Acknowledgements: The results refer to the REFLEX project, which has received grants from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 691685.
Keywords: decarbonisation, greenhouse gas emissions, e-mobility, transport policies, energy
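To illustrate the scale of the effect (a back-of-the-envelope sketch with invented fleet figures, not ASTRA or REFLEX outputs), tailpipe emissions under an e-vehicle diffusion scenario can be compared as:

```python
def fleet_tailpipe_co2_mt(stock_millions, km_per_vehicle, g_co2_per_km, ev_share):
    # Annual tailpipe CO2 in megatonnes, treating e-vehicles as zero-emission
    # at the tailpipe; conventional vehicles emit g_co2_per_km on average
    return stock_millions * 1e6 * km_per_vehicle * g_co2_per_km * (1.0 - ev_share) / 1e12

baseline = fleet_tailpipe_co2_mt(250, 12000, 120, ev_share=0.0)
scenario = fleet_tailpipe_co2_mt(250, 12000, 120, ev_share=0.6)
```

Upstream electricity emissions, fleet turnover dynamics, and demand-management feedbacks (precisely what a system-dynamics model such as ASTRA captures) are deliberately omitted from this static sketch.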
Procedia PDF Downloads 154
61 Refurbishment Methods to Enhance Energy Efficiency of Brick Veneer Residential Buildings in Victoria
Authors: Hamid Reza Tabatabaiefar, Bita Mansoury, Mohammad Javad Khadivi Zand
Abstract:
The current energy and climate change impacts of the residential building sector in Australia are significant, and the Australian Government has therefore introduced more stringent regulations to improve building energy efficiency. In 2006, the Australian residential building sector consumed about 11% (around 440 petajoules) of the total primary energy, resulting in total greenhouse gas emissions of 9.65 million tonnes CO2-eq. Gas and electricity consumption of residential dwellings contributed 30% and 52%, respectively, of the total primary energy utilised by this sector. Around 40 percent of the total energy consumption of Australian buildings goes to heating and cooling due to the buildings' low thermal performance. Thermal performance determines the amount of energy used for heating and cooling, and thus profoundly influences energy efficiency. Employing sustainable design principles and the effective use of construction materials can play a crucial role in improving the thermal performance of new and existing buildings. Even though awareness has been raised, the design phase of refurbishment projects is often problematic. One of the issues concerning the refurbishment of residential buildings is the nature of the consumer market, where most work consists of moderate refurbishment jobs, often without the assistance of an architect and partly without a building permit. The resulting individual and often fragmented approach leads to a lack of efficiency. Most importantly, the decisions taken in the early stages of the design determine the final result; however, the assessment of environmental performance only happens at the end of the design process, as a reflection of the design outcome. Finally, studies have identified the lack of knowledge, experience, and best-practice examples as barriers in refurbishment projects.
In the context of sustainable development and the need to reduce energy demand, refurbishing the ageing residential building stock constitutes a necessary action. Not only does it provide huge potential for energy savings, but it is also economically and socially relevant. Although the advantages have been identified, existing guidelines come in the form of general suggestions that fail to address the diversity of each project. As a result, it has been recognised that there is a strong need to develop guidelines for the optimised retrofitting of existing residential buildings in order to improve their energy performance. The current study investigates the effectiveness of different energy retrofitting techniques and examines the impact of employing those methods on the energy consumption of residential brick veneer buildings in Victoria (Australia). Proposing different remedial solutions for improving the energy performance of residential brick veneer buildings, annual energy usage analyses have been carried out in the simulation stage to determine the heating and cooling energy consumption of the buildings for the different proposed retrofitting techniques. The results of employing the different retrofitting methods have then been examined and compared in order to identify the most efficient and cost-effective remedial solution for improving the energy performance of those buildings with respect to the climate conditions in Victoria and the construction materials of the studied benchmark building.
Keywords: brick veneer residential buildings, building energy efficiency, climate change impacts, cost-effective remedial solution, energy performance, sustainable design principles
Procedia PDF Downloads 294
60 From Faces to Feelings: Exploring Emotional Contagion and Empathic Accuracy through the Enfacement Illusion
Authors: Ilenia Lanni, Claudia Del Gatto, Allegra Indraccolo, Riccardo Brunetti
Abstract:
Empathy represents a multifaceted construct encompassing affective and cognitive components. Among these, empathic accuracy—defined as the ability to accurately infer another person’s emotions or mental state—plays a pivotal role in fostering empathetic understanding. Emotional contagion, the automatic process through which individuals mimic and synchronize facial expressions, vocalizations, and postures, is considered a foundational mechanism for empathy. This embodied simulation enables shared emotional experiences and facilitates the recognition of others’ emotional states, forming the basis of empathic accuracy. Facial mimicry, an integral part of emotional contagion, creates a physical and emotional resonance with others, underscoring its potential role in enhancing empathic understanding. Building on these findings, the present study explores how manipulating emotional contagion through the enfacement illusion impacts empathic accuracy, particularly in the recognition of complex emotional expressions. The enfacement illusion was implemented as a visuo-tactile multisensory manipulation, during which participants experienced synchronous and spatially congruent tactile stimulation on their own face while observing the same stimulation being applied to another person’s face. This manipulation enhances facial mimicry, which is hypothesized to play a key role in improving empathic accuracy. Following the enfacement illusion, participants completed a modified version of the Diagnostic Analysis of Nonverbal Accuracy–Form 2 (DANVA2-AF). The task included 48 images of adult faces expressing happiness, sadness, or morphed emotions blending neutral with happiness or sadness to increase recognition difficulty. These images featured both familiar and unfamiliar faces, with familiar faces belonging to the actors involved in the prior visuo-tactile stimulation. 
Participants were required to identify the target's emotional state as either "happy" or "sad," with response accuracy and reaction times recorded. Results from this study indicate that emotional contagion, as manipulated through the enfacement illusion, significantly enhances empathic accuracy, particularly for the recognition of happiness. Participants demonstrated greater accuracy and faster response times in identifying happiness when viewing familiar faces compared to unfamiliar ones. These findings suggest that the enfacement illusion strengthens emotional resonance and facilitates the processing of positive emotions, which are inherently more likely to be shared and mimicked. Conversely, for the recognition of sadness, an opposite but non-significant trend was observed. Specifically, participants were slightly faster at recognizing sadness in unfamiliar faces compared to familiar ones. This pattern suggests potential differences in how positive and negative emotions are processed within the context of facial mimicry and emotional contagion, warranting further investigation. These results provide insights into the role of facial mimicry in emotional contagion and its selective impact on empathic accuracy. This study highlights how the enfacement illusion can precisely modulate the recognition of specific emotions, offering a deeper understanding of the mechanisms underlying empathy.
Keywords: empathy, emotional contagion, enfacement illusion, emotion recognition
Procedia PDF Downloads 12