Search results for: electrical measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4554

534 Electronics Thermal Management Driven Design of an IP65-Rated Motor Inverter

Authors: Sachin Kamble, Raghothama Anekal, Shivakumar Bhavi

Abstract:

Thermal management of electronic components packaged inside an IP65-rated enclosure is of prime importance in industrial applications. The electrical enclosure protects multiple board configurations, such as the inverter, power and controller board components, busbars, and various power-dissipating components, from harsh environments. Industrial environments often have relatively warm ambient conditions, and the electronic components housed in the enclosure dissipate heat, so both the enclosure and the components require thermal management and a reduction of the internal ambient temperature. A Design of Experiments based thermal simulation approach considering MOSFET arrangement, heat sink design, enclosure volume, copper and aluminum spreaders, power density, and printed circuit board (PCB) type was used to optimize the air temperature inside the IP65 enclosure and ensure a conducive operating temperature for the controller board and electronic components, accounting for the different modes of heat transfer, viz. conduction, natural convection and radiation, using Ansys ICEPAK. MOSFETs in a parallel arrangement, an IP65 enclosure-molded heat sink with rectangular fins on both enclosures, an enclosure volume chosen to satisfy the power density, a copper spreader to conduct heat to the enclosure, an optimized power density value, and an aluminum-clad PCB that improves heat transfer were the contributors towards achieving a conducive operating temperature inside the IP65-rated motor inverter enclosure. A reduction of 52 ℃ in the internal ambient temperature inside the IP65 enclosure was achieved between the baseline and final design parameters, which met the operating temperature requirements of the electronic components inside the IP65-rated motor inverter.
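
As a rough illustration of the Design of Experiments setup named above, the sketch below enumerates a small full-factorial design over a few of the listed factors; the factor names and levels are hypothetical placeholders, not the levels actually simulated in the study.

```python
# Hypothetical full-factorial DoE table over a subset of the thermal-simulation factors
# named above; the levels are illustrative placeholders, not the values used in the study.
from itertools import product

factors = {
    "mosfet_arrangement": ["series", "parallel"],
    "spreader_material":  ["copper", "aluminum"],
    "pcb_type":           ["FR4", "aluminum_clad"],
    "power_density_W_l":  [50, 100],
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} simulation runs")   # 2^4 = 16 Icepak runs in this sketch
for run in runs[:3]:
    print(run)
```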

Keywords: Ansys ICEPAK, aluminium clad PCB, IP 65 enclosure, motor inverter, thermal simulation

Procedia PDF Downloads 121
533 Designing a Socio-Technical System for Groundwater Resources Management, Applying Smart Energy and Water Meter

Authors: S. Mahdi Sadatmansouri, Maryam Khalili

Abstract:

The world nowadays faces a serious water scarcity problem. Over the past few years, with the advent of the Smart Energy and Water Meter (SEWM) and its installation at the electro-pumps of water wells, it was believed that it could be the golden key to address the over-pumping of groundwater resources. In fact, implementation of these smart meters managed to control the water table drawdown for a short time, but it was not a sustainable approach. The SEWM was initially treated as a law-enforcement facility; however, solving a complex socioeconomic problem like shared groundwater resources management requires more than enforcement: it requires participation to conserve common resources. The well owners or farmers, as water consumers, are the main and direct stakeholders of this system; other stakeholders include government sectors, investors, technology providers, private sectors, and ordinary people. Designing a socio-technical system not only defines the role of each stakeholder but can also facilitate communication towards the system goals while the benefits of each stakeholder are considered and provided. Farmers, as the key participants in solving the groundwater problem, do not trust governments, but they would trust a fair system in which responsibilities, privileges, and benefits are clear. Technology can help this system remain impartial and productive. Social aspects provide rules, regulations, social objects, etc. for the system and help it to be more human-centered. As the design methodology, Design Thinking provides probable solutions for challenging problems and ongoing conflicts and can illuminate the way in which the final system could be designed. Using IDEO's Human Centered Design approach helps to keep farmers at the center of the solution and provides a vision by which stakeholders' requirements and needs are addressed effectively. Farmers are expected to trust the system and participate in the management of their groundwater resources if they find its rules and tools fair and effective. Moreover, implementation of the socio-technical system could change farmers' behavior so that they care more about their valuable shared water resources as well as their farm profit. This socio-technical system contains nine main subsystems: 1) Measurement and Monitoring system, 2) Legislation and Governmental system, 3) Information Sharing system, 4) Knowledge-based NGOs, 5) Integrated Farm Management system (using IoT), 6) Water Market and Water Banking system, 7) Gamification, 8) Agribusiness ecosystem, 9) Investment system.

Keywords: human centered design, participatory management, smart energy and water meter (SEWM), social object, socio-technical system, water table drawdown

Procedia PDF Downloads 294
532 Communication Skills Training in Continuing Nursing Education: Enabling Nurses to Improve Competency and Performance in Communication

Authors: Marzieh Moattari Mitra Abbasi, Masoud Mousavinasab, Poorahmad

Abstract:

Background: Nurses in their daily practice need to communicate with patients and their families as well as with members of the health professional team. Effective communication contributes to patient satisfaction, which is a fundamental outcome of nursing practice. There is some evidence of patients' dissatisfaction with nurses' performance in the communication process. Therefore, improving nurses' communication skills is a necessity for nursing scholars and nursing administrators. Objective: The aim of the present study was to evaluate the effect of a 2-day workshop on nurses' competencies and performance in communication in a central hospital located in the south of Iran. Materials and Method: This is a randomized controlled trial comprising a convenience sample of 70 eligible nurses working in a central hospital. They were randomly divided into experimental and control groups. Nurses' competencies were measured by an Objective Structured Clinical Examination (OSCE), and their performance was measured by asking eligible patients hospitalized in the nurses' work setting during a one-month period to evaluate the nurses' communication skills before and 2 months after the intervention. The experimental group participated in a 2-day workshop on communication skills. The content of this workshop included the importance of communication (verbal and non-verbal) and basic communication skills such as initiating communication, active listening, and questioning techniques. Other subjects were patient teaching, problem solving and decision making, cross-cultural communication, and breaking bad news. Appropriate teaching strategies such as brief didactic sessions, small group discussion, and reflection were applied to enhance participants' learning. The data were analyzed using SPSS 16. Result: A significant between-group difference was found in nurses' communication skills competencies and performance in the posttest. The mean scores of the experimental group were higher than those of the control group in the total OSCE score as well as in all OSCE stations (p<0.003). Overall posttest mean scores of patient satisfaction with nurses' communication skills and all of its four dimensions differed significantly between the two groups of the study (p<0.001). Conclusion: This study shows that educating nurses in communication skills improves their competencies and performance. Measurement of nurses' communication skills, as a central component of an efficient nurse-patient relationship, by valid and reliable methods of evaluation is recommended. It is also necessary to integrate the teaching of communication skills into continuing nursing education programs. Trial Registration Number: IRCT201204042621N11

Keywords: communication skills, simulation, performance, competency, objective structured clinical evaluation

Procedia PDF Downloads 218
531 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units

Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro

Abstract:

In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days. If a relocation or a change of period is required, the consumer must be notified in writing, in advance of the billing period. To make it easier to organize a workday's measurements, these companies create a reading plan. These plans consist of grouping customers into reading groups, which are visited by an employee responsible for measuring consumption and billing. Creating such a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, the employee's working load and the geographical position of the consuming units. This process is currently done manually by several experts with experience in the geographic formation of the region; it takes many days to complete the final planning and, because it is a human activity, there is no guarantee of finding the best planning optimization. In this paper, the GBKMeans method presents a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to create the final planning. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of the working load and 11.97% in the compactness of the groups.
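
To make the capacitated clustering idea concrete, the sketch below shows a greedy capacity-constrained assignment step inside a K-Means loop on toy data; variable names and parameters are illustrative. The GBKMeans method described above additionally uses a genetic algorithm (omitted here) to search for good cluster seeds.

```python
# Toy sketch of a capacity-constrained K-Means step (hypothetical names, synthetic data).
# GBKMeans, as described above, additionally evolves the initial centroids with a genetic
# algorithm; that part is omitted here.
import numpy as np

def capacitated_assign(points, centroids, capacity):
    """Greedily assign each point to the nearest centroid that still has spare capacity."""
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)  # (n, k) distances
    order = np.argsort(d.min(axis=1))      # process points in order of distance to nearest centroid
    labels = np.full(len(points), -1)
    load = np.zeros(len(centroids), dtype=int)
    for i in order:
        for c in np.argsort(d[i]):         # try centroids from nearest to farthest
            if load[c] < capacity:
                labels[i], load[c] = c, load[c] + 1
                break
    return labels

def kmeans_step(points, centroids, capacity):
    labels = capacitated_assign(points, centroids, capacity)
    new_centroids = np.array([
        points[labels == c].mean(axis=0) if np.any(labels == c) else centroids[c]
        for c in range(len(centroids))
    ])
    return new_centroids, labels

# 200 consumer units, 5 reading groups, at most 45 units per group (working-load constraint)
rng = np.random.default_rng(0)
pts = rng.random((200, 2))
cents = pts[rng.choice(len(pts), 5, replace=False)]
for _ in range(10):
    cents, lab = kmeans_step(pts, cents, capacity=45)
print(np.bincount(lab))                    # group sizes never exceed the capacity
```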

Keywords: capacitated clustering, k-means, genetic algorithm, districting problems

Procedia PDF Downloads 198
530 Currently Use Pesticides: Fate, Availability, and Effects in Soils

Authors: Lucie Bielská, Lucia Škulcová, Martina Hvězdová, Jakub Hofman, Zdeněk Šimek

Abstract:

The currently used pesticides represent a broad group of chemicals with various physicochemical and environmental properties, whose input has reached 2×10⁶ tons/year and is expected to increase even further. Of that amount, only 1% directly interacts with the target organism, while the rest represents a potential risk to the environment and human health. Despite being authorized and approved for field applications, the effects of pesticides in the environment can differ from the model scenarios due to various pesticide-soil interactions and the resulting modified fate and behavior. As such, direct monitoring of pesticide residues and evaluation of their impact on soil biota, the aquatic environment, food contamination, and human health should be performed to prevent environmental and economic damage. The present project focuses on fluvisols, as they are intensively used in agriculture but face several environmental stressors. Fluvisols develop in the vicinity of rivers by the periodic settling of alluvial sediments and periodic interruptions to pedogenesis by flooding. As a result, fluvisols exhibit very high yields per unit area, are intensively used, and are loaded with pesticides. Because of the floods, their regular contact with surface water raises serious concerns about surface water contamination. In order to monitor pesticide residues and assess their environmental and biological impact within this project, 70 fluvisols were sampled across the Czech Republic and analyzed for the total and bioaccessible amounts of 40 various pesticides. For that purpose, methodologies for pesticide extraction and analysis with the liquid chromatography-mass spectrometry technique were developed and optimized. To assess the biological risks, both earthworm bioaccumulation tests and various types of passive sampling techniques (XAD resin, Chemcatcher, and silicone rubber) were optimized and applied. These data on chemical analysis and bioavailability were combined with the results of soil analysis, including the measurement of basic physicochemical soil properties as well as detailed characterization of soil organic matter with the advanced method of diffuse reflectance infrared spectrometry. The results provide unique data on the residual levels of pesticides in the Czech Republic and on the factors responsible for increased pesticide residue levels that should be included in the modeling of pesticide fate and effects.

Keywords: currently used pesticides, fluvisols, bioavailability, QuEChERS, liquid chromatography-mass spectrometry, soil properties, DRIFT analysis, pesticides

Procedia PDF Downloads 463
529 The Relations between Coping Strategies, Caregiver Bonding, and Dating Violence of Emerging Adults: Cross-Cultural Comparison between China and Turkiye

Authors: Zubaidan Yushan, Hudayar Cıhan

Abstract:

Turkiye and China are countries with collective cultures, but they have different cultural backgrounds, different religions, and different levels of economic development. The aim of this study is to test the moderating effect of caregiver bonding on the relationship between dating violence and coping strategies among unmarried emerging adults in China and Turkiye. Participants' ages ranged from 19 to 26 years (M=23.66, SD=3.66). The sample included 171 unmarried Turkish emerging adults (72.5% women, 24% men, 3.5% prefer not to say) and 170 Chinese emerging adults (71.8% women, 21.8% men, 6.5% prefer not to say). All participants had been in a relationship for more than six months. Participants completed the Revised Conflict Tactics Scales (CTS2), the COPE Inventory, and the Parental Bonding Instrument (PBI). Moderation analyses examining the relationship between participants' dating violence and coping strategies, with caregiver bonding as the moderator, were performed using jamovi. Significance was tested using the bootstrapping method with bias-corrected confidence estimates. The outcome variable for the analysis was dating violence, and the predictor variable was coping strategies. The moderator variable evaluated for the analysis was parent attachment. Before the analysis, mean-centered scores of each predictor and moderator were calculated, and the moderation analysis was conducted separately for each outcome. The moderation analysis results show that the sub-dimension of over-protection moderates psychological aggression perpetration and avoidance coping in China. The sub-dimension of care moderates injury victimization and avoidance coping in Turkiye; over-protection also moderates injury victimization and social support coping. Moreover, the sub-dimension of care moderates sexual coercion perpetration and avoidance coping. The finding that caregiver bonding moderates the relationship between coping strategies and dating violence may be explained by the fact that our ways of coping with problems are learned, and people are influenced by their parents when they face problems. Problem-solving styles therefore become fixed, and each person develops habitual solutions to problems; however, sometimes these solutions become the justification for the injured or abusive person. The quality of attachment to parents can regulate this state. The results are somewhat similar to, and slightly different from, those in the previous literature. These mixed results indicate the need for further exploration. Many other factors, such as alcohol, drug-related violence, and pathological problems, may be the reasons for these differences. In addition, diverse factors such as the study environment and the applied measurement scales may also affect the results.
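
For readers unfamiliar with moderation analysis, the sketch below reproduces its core structure, a regression with a mean-centered predictor, a mean-centered moderator, and their interaction term, on synthetic data; the variable names and effect sizes are hypothetical and do not correspond to the study's jamovi models.

```python
# Illustrative moderation analysis (predictor x moderator interaction) on synthetic data;
# variable names and coefficients below are hypothetical, not the study's results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 341
df = pd.DataFrame({
    "avoidance_coping": rng.normal(size=n),
    "overprotection":   rng.normal(size=n),
})
df["psych_aggression"] = (0.3 * df.avoidance_coping
                          + 0.2 * df.overprotection
                          + 0.25 * df.avoidance_coping * df.overprotection
                          + rng.normal(scale=1.0, size=n))

# mean-center predictor and moderator before forming the interaction term
df["coping_c"] = df.avoidance_coping - df.avoidance_coping.mean()
df["protect_c"] = df.overprotection - df.overprotection.mean()

model = smf.ols("psych_aggression ~ coping_c * protect_c", data=df).fit()
print(model.summary().tables[1])   # a significant coping_c:protect_c term indicates moderation
```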

Keywords: caregiver bonding, coping strategies, dating violence, emerging adulthood, cross-cultural comparison

Procedia PDF Downloads 55
528 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval

Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle

Abstract:

Information regarding the post-mortem interval (PMI) in criminal investigations is vital to establish a time frame when reconstructing events. PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as 'grave wax', is formed when post-mortem adipose tissue is converted into a solid material that is heavily composed of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation is able to slow down the decomposition process. Therefore, analysing the changes in the patterns of fatty acids during the early decomposition process may make it possible to estimate the period of burial, and hence the PMI. The current study investigated the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition are associated with the duration of burial and hence may be used to estimate PMI. Adipose tissue from the abdominal region of domestic pigs (Sus scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9) and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters by gas chromatography. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period. Levels rose (to 116 and 60.2 mg/mL, respectively) and then fell from the second week onwards to reach 19.3 and 18.3 mg/mL, respectively, at week 6. Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively), followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in the concentrations, several new fatty acids appeared in the latter weeks, while other fatty acids that were detectable in the time-zero sample were lost in the latter weeks. There are several probable opportunities to utilise fatty acid analysis as a basic technique for approximating PMI: the quantification of marker fatty acids and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could be an additional tool to the already available techniques or methods, improving the overall process of estimating the PMI of a corpse.
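
The quantification step ("quantified from their respective standards") typically amounts to an external-standard calibration; a minimal sketch is given below, with made-up peak areas and concentrations purely for illustration.

```python
# Minimal sketch of quantifying a fatty acid from an external-standard calibration curve,
# as implied by "quantified from their respective standards"; all numbers are illustrative.
import numpy as np

# peak areas measured for C16:0 standards of known concentration (mg/mL)
std_conc = np.array([10.0, 25.0, 50.0, 100.0, 150.0])
std_area = np.array([1.1e5, 2.7e5, 5.6e5, 1.09e6, 1.66e6])

slope, intercept = np.polyfit(std_conc, std_area, 1)   # linear calibration: area = m*conc + b

sample_area = 7.6e5                                     # peak area from a buried-tissue extract
sample_conc = (sample_area - intercept) / slope
print(f"estimated C16:0 concentration: {sample_conc:.1f} mg/mL")
```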

Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval

Procedia PDF Downloads 131
527 Investigation of Electrochemical, Morphological, Rheological and Mechanical Properties of Nano-Layered Graphene/Zinc Nanoparticles Incorporated Cold Galvanizing Compound at Reduced Pigment Volume Concentration

Authors: Muhammad Abid

Abstract:

The ultimate goal of this research was to produce a cold galvanizing compound (CGC) at reduced pigment volume concentration (PVC) to protect metallic structures from corrosion. The influence of the partial replacement of Zn dust by nano-layered graphene (NGr) and Zn metal nanoparticles on the electrochemical, morphological, rheological, and mechanical properties of the CGC was investigated. Electrochemical impedance spectroscopy (EIS) was used to explore the electrochemical nature of the coatings. The EIS results revealed that the partial replacement of Zn by NGr and Zn nanoparticles enhanced the cathodic protection at reduced PVC (4:1) by improving the electrical contact between the Zn particles and the metal substrate. A Tafel scan was conducted to support the cathodic behaviour of the coatings. The sample formulated solely with Zn at PVC 4:1 was found to be dominated by physical barrier characteristics rather than cathodic protection. By increasing the concentration of NGr in the formulation, the corrosion potential shifted towards the more negative side. The coating with 1.5% NGr showed the highest galvanic action at reduced PVC. FE-SEM confirmed the interconnected network of conducting particles. The coating without NGr and Zn nanoparticles at PVC 4:1 showed significant gaps between the Zn dust particles. The novelty was evidenced by micrographs showing a consistent distribution of NGr and Zn nanoparticles all over the surface, which acted as bridges between spherical Zn particles and provided cathodic protection at a reduced PVC. The layered structure of graphene also improved the physical shielding effect of the coatings by limiting the diffusion of electrolytes and corrosion products (oxides/hydroxides) into the coatings, as reflected by the salt spray test. The rheological properties of the coatings showed good liquid/fluid behaviour. All the coatings showed excellent adhesion but had different strength values. A real-time scratch resistance assessment showed that all the coatings had good scratch resistance.

Keywords: protective coatings, anti-corrosion, galvanization, graphene, nanomaterials, polymers

Procedia PDF Downloads 96
526 The Comparative Electroencephalogram Study: Children with Autistic Spectrum Disorder and Healthy Children Evaluate Classical Music in Different Ways

Authors: Galina Portnova, Kseniya Gladun

Abstract:

Twenty-seven children with ASD (average age 6.13 years; average CARS score 32.41) and 25 healthy children (average age 6.35 years) participated in our EEG experiment. Six types of musical stimulation were presented, including Gluck, Javier-Naida, Kenny G, Chopin and other classical musical compositions. Children with autism showed an orientation reaction to the music and gave behavioral responses to different types of music, and some of them were able to rate the stimuli on scales. The participants were instructed to remain calm. Brain electrical activity was recorded using a 19-channel EEG recording device, 'Encephalan' (Russia, Taganrog). EEG epochs lasting 150 s were analyzed using the EEGLab plugin for MatLab (Mathwork Inc.). For EEG analysis, we used the Fast Fourier Transform (FFT) and analyzed the peak alpha frequency (PAF), the correlation dimension D2, and the stability of rhythms. To express the desynchronization dynamics of different rhythms, we calculated the envelope of the EEG signal over the whole frequency range and with a set of narrowband filters, using the Hilbert transform. Our data showed that healthy children exhibited similar EEG spectral changes during musical stimulation and described the feelings induced by the musical fragments. The exception was the 'Chopin. Prelude' fragment (no. 6). This musical fragment induced different subjective feelings, behavioral reactions and EEG spectral changes in children with ASD and healthy children. The correlation dimension D2 was significantly lower in children with ASD than in healthy children during musical stimulation. The Hilbert envelope frequency was reduced in all groups of subjects during musical compositions 1, 3, 5 and 6 compared to the background. During musical fragments 2 and 4 (terrible), a lower Hilbert envelope frequency was observed only in children with ASD and correlated with the severity of the disease. The alpha peak frequency was lower compared to the background during these musical compositions in healthy children and, conversely, higher in children with ASD.
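
A minimal sketch of the band-limited Hilbert-envelope computation described above follows; the sampling rate, band edges, and toy signal are assumptions for illustration only.

```python
# Sketch of the narrowband Hilbert-envelope computation mentioned above: band-pass filter
# the EEG, take the analytic signal, keep its magnitude. Parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) * (1 + 0.5 * np.sin(2 * np.pi * 0.3 * t))  # toy alpha signal

def band_envelope(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)             # zero-phase narrowband filtering
    return np.abs(hilbert(filtered))         # instantaneous amplitude (envelope)

alpha_env = band_envelope(eeg, 8, 12, fs)
print(alpha_env.mean())
```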

Keywords: electroencephalogram (EEG), emotional perception, ASD, musical perception, childhood Autism rating scale (CARS)

Procedia PDF Downloads 284
525 Assessing the Blood-Brain Barrier (BBB) Permeability in PEA-15 Mutant Cat Brain using Magnetization Transfer (MT) Effect at 7T

Authors: Sultan Z. Mahmud, Emily C. Graff, Adil Bashir

Abstract:

Phosphoprotein enriched in astrocytes, 15 kDa (PEA-15) is a multifunctional adapter protein associated with the regulation of apoptotic cell death. Recently, it has been discovered that PEA-15 is crucial for normal neurodevelopment in domestic cats, a gyrencephalic animal model, although the exact function of PEA-15 in neurodevelopment is unknown. This study investigates how PEA-15 affects blood-brain barrier (BBB) permeability in the cat brain, which can cause abnormalities in tissue metabolite and energy supplies. Severe polymicrogyria and microcephaly have been observed in cats with a loss-of-function PEA-15 mutation, affecting the normal neurodevelopment of the cat. This suggests that the vital role of PEA-15 in neurodevelopment is associated with gyrification. Neurodevelopment is a highly energy-demanding process, and the mammalian brain depends on glucose as its main energy source. PEA-15 plays a very important role in glucose uptake and utilization by interacting with phospholipase D1 (PLD1). Mitochondria also play a critical role in bioenergetics and are essential to supply the energy needed for neurodevelopment. Cerebral blood flow regulates the supply of metabolites, and recent findings show that blood plasma contains mitochondria as well, so the BBB can play a very important role in regulating metabolite and energy supply in the brain. In this study, the blood-brain barrier permeability in the cat brain was measured using the MRI magnetization transfer (MT) effect on the perfusion signal. Perfusion is the tissue-mass-normalized supply of blood to the capillary bed; it also accommodates the supply of oxygen and other metabolites to the tissue. A fraction of the arterial blood water can diffuse into the tissue, depending on the BBB permeability; this fraction is known as the water extraction fraction (EF). MT is a process of saturating macromolecules, which affects blood water that has diffused into the tissue while having minimal effect on intravascular blood water that has not exchanged with the tissue. Measurement of the perfusion signal with and without MT makes it possible to estimate the microvascular blood flow, EF, and the permeability surface-area product (PS) in the brain. All experiments were performed on a Siemens 7T Magnetom with a 32-channel head coil. Three control cats and three PEA-15 mutant cats were used for the study. For the control cats, the average EF in white and gray matter was 0.9±0.1 and 0.86±0.15 respectively, perfusion in white and gray matter was 85±15 mL/100g/min and 97±20 mL/100g/min respectively, and PS in white and gray matter was 201±25 mL/100g/min and 225±35 mL/100g/min respectively. For the PEA-15 mutant cats, the average EF in white and gray matter was 0.81±0.15 and 0.77±0.2 respectively, perfusion in white and gray matter was 140±25 mL/100g/min and 165±18 mL/100g/min respectively, and PS in white and gray matter was 240±30 mL/100g/min and 259±21 mL/100g/min respectively. These results show that the BBB is compromised in the PEA-15 mutant cat brain, where EF is decreased and perfusion as well as PS are increased in the mutant cats compared to the control cats. These findings might further explain the function of PEA-15 in neurodevelopment.
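
For context, the extraction fraction, flow, and PS are commonly related through the Renkin-Crone expression; the abstract does not state the exact model used, so the relation below is an assumption, though it is numerically consistent with the reported control white-matter values.

```latex
% Renkin-Crone relation between extraction fraction E, perfusion F and
% permeability-surface-area product PS (assumed model, not stated in the abstract)
PS = -F \,\ln(1 - E)
% e.g. control white matter: PS \approx -85 \times \ln(1 - 0.9) \approx 196 \ \mathrm{mL/100g/min},
% close to the reported 201 \pm 25 mL/100g/min.
```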

Keywords: BBB, cat brain, magnetization transfer, PEA-15

Procedia PDF Downloads 143
524 Cost Effective Microfabrication Technique for Lab on Chip (LOC) Devices Using Epoxy Polymers

Authors: Charmi Chande, Ravindra Phadke

Abstract:

Microfluidic devices are fabricated using multiple fabrication methods. Photolithography is one of the common methods, wherein SU-8 is widely used for making the master, which in turn is used for making the working chip by the process of soft lithography. The high-aspect-ratio features of SU-8 make it suitable for micro moulds for injection moulding, hot embossing, and moulds to form polydimethylsiloxane (PDMS) structures for bioMEMS (microelectromechanical systems) applications. However, high cost, difficulty in procurement, and the need for a clean room restrict the use of this polymer, especially in developing countries and small research labs. 'Bisphenol-A' based polymers, in mixture with a curing agent, are used in various industries such as paints and coatings, adhesives, electrical systems and electronics, and industrial tooling and composites. We present the novel use of 'Bisphenol-A' based polymers in fabricating microchannels for Lab-on-Chip (LOC) devices. The present paper describes a prototype for the production of microfluidic chips using a range of 'Bisphenol-A' based polymers, viz. GY 250, ATUL B11, DER 331, and DER 330, in mixture with cationic photoinitiators. All the steps of chip production were carried out using an inexpensive approach that uses low-cost chemicals and equipment and even excludes the need for a clean room. The chips produced using all the above-mentioned polymers were validated with respect to channel height, and the chip giving the least height was selected for further experimentation. The lowest height achieved was 7 micrometers, with GY 250. The cost of the fabricated master was $0.20 and that of the working chip was $0.22. The best working chip was used for morphological identification and profiling of microorganisms from environmental samples such as soil, marine water and saltwater pan sites. The current chip can be adapted for various microbiological screening experiments such as biochemical-based microbial identification and studying uncultivable microorganisms at the single cell/community level.

Keywords: bisphenol–A based epoxy, cationic photoinitiators, microfabrication, photolithography

Procedia PDF Downloads 286
523 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
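
A compact sketch of two of the analysis steps above, ranking inputs by mutual information and comparing a gradient-boosted regressor against the 15-minute-lag baseline, is given below on synthetic data; the feature names and numbers are hypothetical, not the Taiwan Freeway data.

```python
# Illustrative sketch (synthetic data, hypothetical feature names): mutual information
# ranking of inputs, and an XGBoost regressor compared against the lag-15-minute baseline.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(7)
n = 2000
X = pd.DataFrame({
    "median_latency_lag15": rng.gamma(5, 60, n),     # past latency, seconds
    "total_accumulation":   rng.normal(300, 50, n),  # vehicles on the segment
    "entrance_rate":        rng.normal(20, 5, n),
    "exit_rate":            rng.normal(20, 5, n),
})
y = 0.8 * X.median_latency_lag15 + 0.5 * X.total_accumulation + rng.normal(0, 30, n)

# (3) mutual information between each input and the latency to be predicted
mi = mutual_info_regression(X, y, random_state=0)
print(dict(zip(X.columns, np.round(mi, 3))))

# (2) gradient-boosted prediction, compared against the lag-15-minute baseline
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1).fit(X_tr, y_tr)
mse_model = np.mean((model.predict(X_te) - y_te) ** 2)
mse_baseline = np.mean((X_te.median_latency_lag15 - y_te) ** 2)
print(f"XGBoost MSE: {mse_model:.0f}  baseline MSE: {mse_baseline:.0f}")
```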

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 169
522 Assessment of Spatial and Temporal Variations of Some Biological Water Quality Parameters in Mat River, Albania

Authors: Etleva Hamzaraj, Eva Kica, Anila Paparisto, Pranvera Lazo

Abstract:

Worldwide demographic developments of recent decades have been associated with negative environmental consequences. For this reason, there is a growing interest in assessing the state of natural ecosystems and the human impact on them. In this respect, this study aims to evaluate the change in water quality of the Mat River over a period of about ten years in order to highlight the human impact. During a one-year study period, several biological and environmental parameters were determined to evaluate the river water quality, and the data collected were compared with those of a similar study from 2007. Samples were collected every month at five stations evenly distributed along the river. Total coliform bacteria, the number of heterotrophic bacteria in the water, and benthic macroinvertebrates were used as biological parameters of water quality. The most probable number (MPN) index was used for the evaluation of total coliform bacteria in the water, while the number of heterotrophic bacteria was determined by counting colonies on Plate Count Agar plates inoculated with 0.1 ml of sample after serial dilutions. Benthic macroinvertebrates were analyzed by the number of individuals per taxon, the biotic index value, the EPT richness index value, and the tolerance value. Environmental parameters such as pH, temperature, and electrical conductivity were measured on site. As expected, the bacterial load was higher near urban areas, and pollution increased along the course of the river. The maximum concentration of fecal coliforms was 1100 MPN/100 ml, in summer and near the most urbanized area of the river. The data collected during this study show that after about ten years there is a change in the water quality of the Mat River. According to the similar study carried out in 2007, the water of the Mat River was of 'excellent' quality, but according to this study the water was classified as of 'excellent' quality only at one sampling site, near the river source, while at all other stations it was of 'good' quality. This result is based on the biological and environmental parameters measured. The human impact on the water quality of the Mat River is more than evident.
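
As an illustration of how a biotic index is typically computed from macroinvertebrate counts and tolerance values, a hedged sketch follows; the abstract does not name the exact index used, and the taxa, counts, and tolerance values below are hypothetical.

```python
# Hedged sketch: one common biotic index (a Hilsenhoff-style family-level index) is the
# abundance-weighted mean of taxon tolerance values. Counts and tolerances are made up.
samples = [
    # (taxon, individuals counted, tolerance value: 0 = sensitive .. 10 = tolerant)
    ("Baetidae",       34, 4),
    ("Hydropsychidae", 21, 5),
    ("Chironomidae",   55, 8),
    ("Perlidae",        6, 1),
]

n_total = sum(n for _, n, _ in samples)
biotic_index = sum(n * t for _, n, t in samples) / n_total
print(f"biotic index = {biotic_index:.2f}  (lower values indicate better water quality)")
```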

Keywords: water quality, coliform bacteria, MPN index, benthic macroinvertebrates, biotic index

Procedia PDF Downloads 118
521 Evaluation of Molasses and Sucrose as Carbohydrate Sources for Biofloc System on Nile Tilapia (Oreochromis niloticus) Performances

Authors: A. M. Nour, M. A. Zaki, E. A. Omer, Nourhan Mohamed

Abstract:

The performance of mixed-sex Nile tilapia (Oreochromis niloticus) fingerlings (11.33 ± 1.78 g/fish) reared under a biofloc system developed with molasses and sucrose as carbon sources in indoor fiberglass tanks was evaluated. Six indoor fiberglass tanks (1 m³ each, filled with 1000 l of underground fresh water), each stocked with 2 kg of fish, were used for the 14-week experimental period. Three experimental groups were designed (two tanks per group) as follows: 1) control: 20% daily water exchange without biofloc, 2) zero water exchange with biofloc (molasses as the carbon source), and 3) zero water exchange with biofloc (sucrose as the carbon source). Fish in all tanks were fed floating feed pellets (30% crude protein, 3 mm in diameter) at a rate of 3% of live body weight, 3 times daily and 6 days a week. Carbohydrate supplements were applied daily to each tank, two hours after feeding, to maintain a carbon-to-nitrogen (C:N) ratio of 20:1. Fish were reared under continuous aeration, by pumping air into the water at the tank bottom through two sandy diffusers, and at a constant temperature between 27.0-28.0 ºC maintained by electrical heaters for 10 weeks. Criteria for the assessment of water quality parameters, biofloc production, and fish growth performance were collected and evaluated. The results showed that total ammonia nitrogen in the control group was higher than in the biofloc groups. The biofloc volumes were 19.13 mg/l and 13.96 mg/l for sucrose and molasses, respectively. Biofloc protein (%), ether extract (%) and gross energy (kcal/100 g DM) were higher in the biofloc molasses group than in the biofloc sucrose group. Tilapia growth performance was significantly higher (P < 0.05) in the molasses group than in the sucrose and control groups. The highest feed and nutrient utilization values for protein efficiency ratio (PER), protein productive value (PPV%), and energy utilization (EU%) were found in the molasses group, followed by the sucrose group and the control group, respectively.
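
The daily carbohydrate dose needed to hold the C:N ratio near 20:1 is usually estimated with a back-of-envelope calculation of the kind sketched below; the nitrogen excretion fraction and the carbon content of the carbohydrate are assumptions, not values reported in the abstract.

```python
# Back-of-envelope sketch of an Avnimelech-style carbohydrate dose to hold C:N near 20:1.
# The excretion fraction and carbon contents below are assumptions, not the study's values.
feed_g          = 60.0    # daily feed per tank (3% of 2 kg biomass), g
protein_frac    = 0.30    # 30% crude protein feed
n_in_protein    = 0.16    # nitrogen fraction of protein (protein = N x 6.25)
n_excreted_frac = 0.50    # assumed fraction of feed N excreted into the water
c_in_carb       = 0.40    # assumed carbon fraction of molasses/sucrose
target_cn       = 20.0

n_excreted = feed_g * protein_frac * n_in_protein * n_excreted_frac
carb_needed = n_excreted * target_cn / c_in_carb
print(f"approx. carbohydrate addition: {carb_needed:.0f} g per tank per day")
```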

Keywords: biofloc, Nile tilapia, carbohydrates, performances

Procedia PDF Downloads 192
520 Applying the Global Trigger Tool in German Hospitals: A Retrospective Study in Surgery and Neurosurgery

Authors: Mareen Brosterhaus, Antje Hammer, Steffen Kalina, Stefan Grau, Anjali A. Roeth, Hany Ashmawy, Thomas Gross, Marcel Binnebosel, Wolfram T. Knoefel, Tanja Manser

Abstract:

Background: The identification of critical incidents in hospitals is an essential component of improving patient safety. To date, various methods have been used to measure and characterize such critical incidents. These methods are often viewed by physicians and nurses as external quality assurance, which creates obstacles to the reporting of events and the implementation of recommendations in practice. One way to overcome this problem is to use tools that directly involve staff in measuring indicators of the quality and safety of care in the department. One such instrument is the Global Trigger Tool (GTT), which helps physicians and nurses identify adverse events by systematically reviewing randomly selected patient records. Based on so-called 'triggers' (warning signals), indications of adverse events can be found. While the tool is already used internationally, its implementation in German hospitals has been very limited. Objectives: This study aimed to assess the feasibility and potential of the Global Trigger Tool for identifying adverse events in German hospitals. Methods: A total of 120 patient records were randomly selected from two surgical departments and one neurosurgery department of three university hospitals in Germany, over a period of two months per department, between January and July 2017. The records were reviewed using an adaptation of the German version of the Institute for Healthcare Improvement Global Trigger Tool to identify triggers and adverse event rates per 1000 patient-days and per 100 admissions. The severity of adverse events was classified using the National Coordinating Council for Medication Error Reporting and Prevention index. Results: A total of 53 adverse events were detected in the three departments. This corresponded to adverse event rates of 25.5-72.1 per 1000 patient-days and 25.0-60.0 per 100 admissions across the three departments. 98.1% of the identified adverse events were associated with non-permanent harm, either without (Category E, 71.7%) or with (Category F, 26.4%) the need for prolonged hospitalization. One adverse event (1.9%) was associated with potentially permanent harm to the patient. We also identified practical challenges in the implementation of the tool, such as the need to adapt the Global Trigger Tool to the respective department. Conclusions: The Global Trigger Tool is feasible and an effective instrument for quality measurement when adapted to departmental specifics. Based on our experience, we recommend continuous use of the tool, thereby directly involving clinicians in quality improvement.
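
For clarity, the two rates quoted in the Results are computed as shown in this small sketch; the event, patient-day, and admission counts below are made up and are not the study's data.

```python
# Simple illustration of how the two GTT rates quoted above are computed; the counts here
# are hypothetical placeholders, not the study's data.
events       = 18      # adverse events found in the reviewed records
patient_days = 520     # total patient-days in those records
admissions   = 40      # number of reviewed admissions

rate_per_1000_patient_days = events / patient_days * 1000
rate_per_100_admissions    = events / admissions * 100
print(f"{rate_per_1000_patient_days:.1f} per 1000 patient-days, "
      f"{rate_per_100_admissions:.1f} per 100 admissions")
```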

Keywords: adverse events, global trigger tool, patient safety, record review

Procedia PDF Downloads 249
519 Electromagnetic-Mechanical Stimulation on PC12 for Enhancement of Nerve Axonal Extension

Authors: E. Nakamachi, K. Matsumoto, K. Yamamoto, Y. Morita, H. Sakamoto

Abstract:

Recently, electromagnetic and mechanical stimulation have been recognized as effective extracellular environment stimulation techniques to enhance the regeneration of defective peripheral nerve tissue. In this study, we developed a new hybrid bioreactor by adopting 50 Hz uniform alternating current (AC) magnetic stimulation and 4% strain mechanical stimulation. The guide tube for nerve regeneration is a mesh-structured tube made of a biodegradable polymer such as polylactic acid (PLA). However, when neural damage is large, there is a possibility that the peripheral nerve undergoes necrosis, so it is quite important to accelerate nerve tissue regeneration by enhancing the nerve axonal extension rate. Therefore, we designed and fabricated a system that can simultaneously apply uniform AC magnetic field stimulation and stretch stimulation to cells to enhance nerve axonal extension, and we evaluated the system's performance and the effectiveness of each stimulation for rat adrenal pheochromocytoma cells (PC12). First, we designed and fabricated the uniform AC magnetic field system and the stretch stimulation system. For the AC magnetic stimulation system, we focused on the use of a pole piece structure to allow in-situ microscopic observation. We designed an optimum pole piece structure using magnetic field finite element analyses and the response surface methodology, and we fabricated the uniform AC magnetic field stimulation system as a bioreactor by adopting the analytically determined design specifications. We measured the magnetic flux density generated by the uniform AC magnetic field stimulation system and confirmed that the measured values show good agreement with the analytical results, with a uniform magnetic field observed. Second, we fabricated the cyclic stretch stimulation device for particular strain conditions, with a chamber made of polyoxymethylene (POM). We measured strains in the PC12 cell culture region to confirm the uniformity of the strain and found values slightly different from the target strain; we concluded that these differences were allowable for this mechanical stimulation system. Finally, we evaluated the effectiveness of each stimulation in enhancing nerve axonal extension using PC12 cells. We confirmed that the average axonal extension length of PC12 under uniform AC magnetic stimulation was increased by 16% at 96 h in our bioreactor. We could not confirm axonal extension enhancement under the stretch stimulation condition, where exfoliation of cells was found. Furthermore, the hybrid stimulation enhanced axonal extension, because the magnetic stimulation inhibits the exfoliation of cells. We therefore concluded that the enhancement of PC12 axonal extension is due to the magnetic stimulation rather than the mechanical stimulation, and we confirmed the effectiveness of uniform AC magnetic field stimulation for nerve axonal extension using PC12 cells.

Keywords: nerve cell PC12, axonal extension, nerve regeneration, electromagnetic-mechanical stimulation, bioreactor

Procedia PDF Downloads 264
518 Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind System: Case Study

Authors: Fatemeh Deldarabdolmaleki, Mohamad Fakri Zaky Bin Ja'afar

Abstract:

Having a daylit space together with a view results in a pleasant and productive environment for office employees. A daylit space is a space which utilizes daylight as the basic source of illumination to fulfill users' visual demands while minimizing electric energy consumption. Malaysian weather is hot and humid all year round because of the country's location in the equatorial belt. However, because most commercial buildings in Malaysia are air-conditioned, huge glass windows are normally installed in order to keep the physical and visual relation between inside and outside. As a result of the climatic situation and this trend, an ordinary office has large heat gains, glare, and discomfort for occupants. Balancing occupant comfort and energy conservation in a tropical climate is a real challenge. This study concentrates on evaluating a venetian blind system using per-pixel analysis tools based on the cut-out metrics suggested in the literature. The workplace area in a private office room was selected as the case study. An eight-day measurement experiment was conducted to investigate the effect of different venetian blind angles in an office area under daylight conditions in Serdang, Malaysia. The study goal was to explore the daylight comfort of a commercially available venetian blind system, its daylight sufficiency and excess (8:00 AM to 5 PM), as well as glare. Recently developed software for analyzing High Dynamic Range Images (HDRI captured by a CCD camera), such as the Radiance-based Evalglare and hdrscope, helps to investigate luminance-based metrics. The main key factors are illuminance and luminance levels, mean and maximum luminance, daylight glare probability (DGP), and the luminance ratio of the selected mask regions. The findings show that in most cases the morning session needs artificial lighting in order to achieve daylight comfort. However, in some conditions (e.g. 10° and 40° slat angles) in the second half of the day, the workplane illuminance level exceeds the maximum of 2000 lx. Generally, a rising trend is observed in the mean window luminance, and the most unpleasant cases occur after 2 P.M. Considering the luminance criteria rating, the uncomfortable conditions occur in the afternoon session. Surprisingly, in the no-blind condition, extreme cases of the window/task ratio are not common. Studying the daylight glare probability, no DGP value higher than 0.35 was found in this experiment.
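
A hedged sketch of classifying hourly workplane illuminance into insufficient, comfortable, and excessive daylight follows; the 2000 lx upper bound comes from the abstract, while the 500 lx lower bound and the hourly readings are assumptions for illustration.

```python
# Hedged sketch of classifying hourly workplane illuminance. The 2000 lx upper bound is
# taken from the abstract; the 500 lx lower bound and the readings are assumptions.
readings_lx = {          # hypothetical hourly workplane illuminance, 8:00-17:00
    8: 180, 9: 320, 10: 540, 11: 760, 12: 980,
    13: 1250, 14: 1900, 15: 2150, 16: 1400, 17: 600,
}

LOWER, UPPER = 500, 2000
for hour, lux in readings_lx.items():
    if lux < LOWER:
        status = "needs artificial lighting"
    elif lux > UPPER:
        status = "excess daylight (risk of glare/overheating)"
    else:
        status = "comfortable daylight"
    print(f"{hour:02d}:00  {lux:5d} lx  {status}")
```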

Keywords: daylighting, energy simulation, office environment, Venetian blind

Procedia PDF Downloads 256
517 The Challenge of Assessing Social AI Threats

Authors: Kitty Kioskli, Theofanis Fotis, Nineta Polemi

Abstract:

The European Union (EU) Artificial Intelligence (AI) Act requires in Article 9 that risk management of AI systems includes both technical and human oversight, while the NIST AI RMF (Appendix C) and ENISA AI framework recommendations state that further research is needed to understand the current limitations of social threats and human-AI interaction. AI threats within social contexts significantly affect the security and trustworthiness of AI systems; they are interrelated and trigger technical threats as well. For example, lack of explainability (e.g. the complexity of models can be challenging for stakeholders to grasp) leads to misunderstandings, biases, and erroneous decisions, which in turn impact the privacy, security, and accountability of the AI systems. Based on the four fundamental NIST criteria for explainability, explainability threats can also be classified into four (4) sub-categories: a) Lack of supporting evidence: AI systems must provide supporting evidence or reasons for all their outputs. b) Lack of understandability: explanations offered by systems should be comprehensible to individual users. c) Lack of accuracy: the provided explanation should accurately represent the system's process of generating outputs. d) Out of scope: the system should only function within its designated conditions or when it possesses sufficient confidence in its outputs. Biases may also stem from historical data reflecting undesired behaviors; when present in the data, biases can permeate the models trained on them, thereby influencing the security and trustworthiness of AI systems. Social AI threats are recognized by various initiatives (e.g., the EU Ethics Guidelines for Trustworthy AI), standards (e.g. ISO/IEC TR 24368:2022 on AI ethical concerns, ISO/IEC AWI 42105 on guidance for human oversight of AI systems) and EU legislation (e.g. the General Data Protection Regulation 2016/679, the NIS 2 Directive 2022/2555, the Directive on the Resilience of Critical Entities 2022/2557, the EU AI Act, the Cyber Resilience Act). Measuring social threats, estimating the risks they pose to AI systems, and mitigating them is a research challenge. This paper presents the efforts of two European Commission projects (FAITH and THEMIS) from the Horizon Europe programme that analyse social threats by building cyber-social exercises in order to study human behaviour, traits, cognitive ability, personality, attitudes, interests, and other socio-technical profile characteristics. The research in these projects also includes the development of measurements and scales (psychometrics) for human-related vulnerabilities that can be used to estimate vulnerability severity more realistically, enhancing the CVSS 4.0 measurement.

Keywords: social threats, artificial Intelligence, mitigation, social experiment

Procedia PDF Downloads 65
516 Brain-Computer Interfaces That Use Electroencephalography

Authors: Arda Ozkurt, Ozlem Bozkurt

Abstract:

Brain-computer interfaces (BCIs) are devices that output commands by interpreting data collected from the brain. Electroencephalography (EEG) is a non-invasive method of measuring the brain's electrical activity. Since it was introduced by Hans Berger in 1929, it has led to many neurological discoveries and has become one of the essential non-invasive measuring methods. Although it has a low spatial resolution, meaning it can only detect when a group of neurons fires at the same time, it is a non-invasive method, making it easy to use without posing any risks. In EEG, electrodes are placed on the scalp, and the voltage difference between a minimum of two electrodes is recorded, which is then used to accomplish the intended task. EEG recordings include, but are not limited to, the currents along dendrites from synapses to the soma, the action potentials along the axons connecting neurons, and the currents through the synaptic clefts connecting axons with dendrites. However, because it is a non-invasive method, there are some sources of noise that may affect the reliability of the EEG signals. For instance, noise from the EEG equipment, the leads, and signals coming from the subject, such as heart activity or muscle movements, affects the signals detected by the electrodes. However, new techniques have been developed to differentiate between those signals and the intended ones. Furthermore, an EEG device alone is not enough to analyze the data from the brain for a BCI application. Because the EEG signal is very complex, artificial intelligence algorithms are required to analyze it. These algorithms convert complex data into meaningful and useful information that neuroscientists can use to design BCI devices. Even though invasive BCIs are needed for neurological conditions that require highly precise data, non-invasive BCIs such as EEG-based systems are used in many cases to help disabled people or to ease people's lives by helping them with basic tasks. For example, EEG can be used to detect an oncoming seizure in epilepsy patients, and a BCI device can then help prevent the seizure. Overall, EEG is a commonly used non-invasive BCI technique that has helped develop BCIs and will continue to be used to collect data that ease people's lives as more BCI techniques are developed in the future.

Keywords: BCI, EEG, non-invasive, spatial resolution

Procedia PDF Downloads 71
515 Railway Ballast Volumes Automated Estimation Based on LiDAR Data

Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert

Abstract:

The ballast layer plays a key role in railroad maintenance and in the geometry of the track structure. Ballast also holds the track in place as trains roll over it; track ballast is packed between the sleepers and on the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as a quick degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to the high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated based on excavation depth, excavation width, the volume of the track skeleton (sleepers and rails), and sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper presents an automated method for ballast volume estimation based on the processing of LiDAR data. The estimation of abnormal ballast volumes on the tracks is performed by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) was generated, and automatic extraction of the ballast profiles from these data is carried out. The surplus ballast is then estimated by comparing this empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on a 108-kilometer segment of railroad LiDAR scans, and the results show that the proposed algorithm detects a ballast surplus that amounts to values close to the total quantities of spoil ballast excavated.
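
The surplus estimation described above can be illustrated per cross-section as in the sketch below: the measured profile from the LiDAR-derived DTM is compared with a theoretical maintenance profile and the positive difference is integrated; both profiles here are synthetic and the template dimensions are assumptions.

```python
# Minimal per-cross-section sketch of the surplus-ballast idea: compare a measured profile
# (from the LiDAR-derived DTM) with a theoretical maintenance profile and integrate the
# positive difference. Both profiles and the template dimensions below are assumptions.
import numpy as np

x = np.linspace(-3.0, 3.0, 121)                      # lateral offset from track axis, m

def theoretical_profile(x):
    # simplified trapezoidal ballast template (crest height and slopes are assumed values)
    return np.clip(0.5 - 0.4 * np.maximum(np.abs(x) - 1.5, 0.0), 0.0, 0.5)

# synthetic measured profile with some extra ballast on the shoulders
measured = theoretical_profile(x) + 0.08 * np.exp(-((np.abs(x) - 2.0) ** 2) / 0.1)

surplus_height = np.maximum(measured - theoretical_profile(x), 0.0)
# trapezoidal integration of the excess height across the cross-section
surplus_area = np.sum(0.5 * (surplus_height[1:] + surplus_height[:-1]) * np.diff(x))
print(f"surplus ~ {surplus_area:.3f} m^2 per metre of track")
```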

Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point

Procedia PDF Downloads 109
514 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil

Authors: Ana Julia C. Kfouri

Abstract:

A prerequisite of any building design is to provide security to the users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which mediate between the person and the environment and must provide improved thermal comfort conditions and low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for building projects, and they should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. Based on that, the study of thermal comfort in educational buildings is of particular relevance, since the thermal characteristics of these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. In this line of thought, to evaluate university classroom conditions, a detailed case study of the thermal comfort situation at the Federal University of Parana (UFPR) was carried out. The main goal of the study is to perform a thermal analysis of three classrooms at UFPR in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied in order to evaluate the users' perception of the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, consisting of measurements of air temperature and air humidity, both inside and outside the building, as well as meteorological variables, such as wind speed and direction, solar radiation, and rainfall, collected from a weather station. Then, a computer simulation based on the EnergyPlus software was conducted to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results in order to draw conclusions about the local thermal conditions. The methodological approach of the study allowed a distinct perspective on an educational building to better understand the classroom thermal performance, as well as the reasons for such behavior. Finally, the study induces a reflection on the importance of thermal comfort for educational buildings and proposes thermal alternatives for future projects, as well as a discussion about the significant impact of using computer simulation on engineering solutions, in order to improve the thermal performance of UFPR's buildings.
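
The comparison between EnergyPlus outputs and on-site measurements typically reduces to aligning the two series and reporting error statistics, as in this minimal sketch; the temperature values below are placeholders, not the study's data.

```python
# Minimal sketch of the simulated-vs-measured comparison described above: align hourly
# EnergyPlus air temperatures with on-site measurements and report error statistics.
# The series below are synthetic placeholders, not the study's data.
import numpy as np

measured_C  = np.array([22.1, 23.0, 24.2, 25.6, 26.8, 27.3, 26.9, 25.7])  # on-site sensor
simulated_C = np.array([21.8, 22.7, 24.5, 25.9, 27.2, 27.8, 27.1, 25.9])  # EnergyPlus output

bias = np.mean(simulated_C - measured_C)                  # mean bias error
rmse = np.sqrt(np.mean((simulated_C - measured_C) ** 2))  # root mean square error
print(f"MBE = {bias:+.2f} degC, RMSE = {rmse:.2f} degC")
```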

Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort

Procedia PDF Downloads 386
513 Developing Family-Based Eco-Citizenship with Social Media: A Mixed Methods Collective Case Study of Families Looking to Adopt Ecologically Responsible Actions Using Facebook

Authors: Michel T. Leger, Shawn Martin

Abstract:

Leading an ecologically responsible lifestyle represents a difficult challenge. Though research in environmental education does point to an increase in the intention to act more responsibly towards the environment, this intent does not seem to translate into concrete ecological action. This mixed methods collective case study explores the adoption of ecological actions in the family, a context of socio-ecological transformation rarely examined in the scientific literature. More specifically, it takes into account the popular use of social media today to explore the potential role of social media, namely Facebook, in promoting environmental action. In other words, for families who are intent on adopting an ecologically friendly lifestyle, could the use of Facebook positively affect the way family members relate to the environment and bring about real change in their daily household actions? To answer this question, twenty-one families living in an urban setting were recruited and divided into two distinct groups. The first group of families attempted to lower their household electricity bill as part of a private Facebook group, while the other aimed to do the same, but without the directed use of social media. For both groups, we recorded the number of kilowatt-hours used during the project as well as the amount used during the same months of the previous year, adjusting for temperature variations. Exit interviews were also conducted with each family in order to understand the processes of eco-citizenship development in the family context. Results suggest that both virtual social networks and one-on-one support can help increase environmental awareness in participating families. Interestingly, families from the Facebook group seemed to demonstrate a higher degree of environmental engagement, and younger family members in this group were more active in the process of collective behavioral change.
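
One plausible way to adjust the year-over-year comparison for temperature variations is to normalize consumption by heating degree days; the study does not specify its adjustment method, so the sketch below and its numbers are purely illustrative.

# Minimal sketch: percent change in household consumption per heating degree day.
# All figures are hypothetical.
def normalized_change(kwh_project, hdd_project, kwh_baseline, hdd_baseline):
    project = kwh_project / hdd_project       # kWh per heating degree day, project period
    baseline = kwh_baseline / hdd_baseline    # kWh per heating degree day, previous year
    return 100.0 * (project - baseline) / baseline

print(normalized_change(kwh_project=820, hdd_project=410,
                        kwh_baseline=905, hdd_baseline=395))   # negative = reduction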

Keywords: environmental education, family-based eco-citizenship, social media, case study

Procedia PDF Downloads 150
512 The Interactions between Phosphorus Leaching and Lime Application in Undisturbed Soil Columns with Different Soil Textures

Authors: Faezeh Eslamian, Zhiming Qi, Michael J. Tate

Abstract:

Phosphorus losses from agricultural fields through leaching are one of the main contributors to the eutrophication of lakes in Quebec as well as across North America. The main objective of this study is to evaluate the application of high-calcium hydrated lime as a soil amendment for reducing the subsurface transport of phosphorus to water bodies, by studying the interactions between phosphorus leaching and lime application in three common agricultural soil textures (sandy loam, loam and clay loam) in Quebec. For this purpose, six intact soil columns, 10 cm in diameter and 20 cm deep, were taken from each of the three agricultural fields with different soil textures. Lime (high-calcium hydrated lime) was applied to the top 5 cm of half of the intact soil columns, while the rest were left as controls. The columns were leached intermittently with artificial rainwater at a rate of 3 mm h⁻¹ over a 90-day period. The total amount of water added was equal to the average total rainfall of the region in fall. The leachate samples were collected daily and analyzed for dissolved reactive phosphorus, total dissolved phosphorus, total phosphorus, pH, electrical conductivity, calcium, magnesium, potassium and iron. The results showed that lime significantly reduced dissolved reactive phosphorus concentrations in the leachates by 70 and 40 percent in the sandy loam and loam soil columns, respectively, while phosphorus concentrations in the clay loam soil leachates increased by 40 percent. The calcium in lime has P-binding capabilities. Soil chemical properties in sandy and loamy soils can affect phosphorus leaching, whereas transport mechanisms dominate phosphorus leaching behavior in clay soils with macropores. The presence of preferential pathways and cracks in the clay soil columns led to rapid transport of phosphorus through the soil and less contact time with the soil matrix, thereby reducing the opportunity for P sorption and causing a larger P release. Application of lime to agricultural fields can therefore be considered a promising measure for mitigating phosphorus loss from sandy loam and loam soils.

Keywords: leaching, lime, phosphorus, soil texture

Procedia PDF Downloads 175
511 A Research Study on Planning of Water-Based Recreation Operations on the Deriner Reservoir and Its Surroundings

Authors: Hi̇lal Surat

Abstract:

People who want to escape stress and an intensive working tempo for a while turn to recreational activities in order to rest and have fun. Therefore, planning recreational activities contributes to the social, physiological, economic and psychological development of individuals and the community by meeting people's needs regularly and continuously. The rapid increase in world population makes it necessary to benefit from natural and man-made resources in multiple ways. Dams and reservoirs built near urban areas for electrical energy generation and agricultural irrigation are also regarded as natural areas providing various opportunities, such as recreational activities. Dams are of great importance for the protection and improvement of water resources and for bringing them into the service of the community. Protecting these water resources, which are essential for nature and living organisms, should be a priority. It should also be taken into consideration that these water resources are the most important input in the area and have a high natural value, which must be preserved to sustain effective recreation. The Deriner reservoir, recently built near the province of Artvin and endowed with natural and cultural assets, is considered an alternative option for meeting people's needs for sports and recreational activities and a potential site for planning water-based recreation. Hence, in this study, activities that meet the expectations of the people who benefit from the area will be developed, considering its natural, cultural and sports recreation opportunities. First, planning criteria for selected sports and water-based recreational activities will be defined so that the area can be used for recreation and sports, and these criteria will serve as the basis for a macro-level planning effort within a holistic perspective of the natural, cultural and economic structure of the area. Subsequently, the needs of local people and the recreational potential of the reservoir will be evaluated; different socio-economic groups will be selected according to income and age, and a previously prepared questionnaire will be administered to these groups. Based on the questionnaire results, the demand for water-based recreational activities will be determined and different suggestions for this reservoir will be developed.

Keywords: dam, dam lakes, Deriner, recreation, water based activities

Procedia PDF Downloads 345
510 Designing Nickel Coated Activated Carbon (Ni/AC) Based Electrode Material for Supercapacitor Applications

Authors: Zahid Ali Ghazi

Abstract:

Supercapacitors (SCs) have emerged as promising energy storage devices because of their fast charge-discharge characteristics and high power densities. In the current study, a simple approach is used to coat activated carbon (AC) with a thin layer of nickel (Ni) by an electroless deposition process in order to enhance the electrochemical performance of the SC. The synergistic combination of the large surface area and high electrical conductivity of the AC with the pseudocapacitive behavior of the metallic Ni shows great potential to overcome the limitations of traditional SC materials. First, the materials were characterized using X-ray diffraction (XRD) for crystallography, scanning electron microscopy (SEM) for surface morphology and energy dispersive X-ray (EDX) spectroscopy for elemental analysis. The electrochemical performance of the nickel-coated activated carbon (Ni/AC) was systematically evaluated through various techniques, including galvanostatic charge-discharge (GCD), cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). The GCD results revealed that Ni/AC has a higher specific capacitance (1559 F/g) than bare AC (222 F/g) at a current density of 1 A/g in a 2 M KOH electrolyte. Even at a higher current density of 20 A/g, the Ni/AC showed a high capacitance of 944 F/g, compared to 77 F/g for AC. The specific capacitance (1318 F/g) calculated from the CV measurements for Ni/AC at 10 mV/s was in close agreement with the GCD data. Furthermore, the bare AC exhibited a low energy density of 15 Wh/kg at a power density of 356 W/kg, whereas an energy density of 111 Wh/kg at a power density of 360 W/kg was achieved by the Ni/AC-850 electrode, which also demonstrated a long cycle life with 94% capacitance retention over 50,000 charge/discharge cycles at 10 A/g. In addition, the EIS study showed that the Rs and Rct values of the Ni/AC electrodes were much lower than those of bare AC. The superior performance of Ni/AC is mainly attributed to the abundance of redox-active sites, the large electroactive surface area and the corrosion resistance of Ni. We believe that this study will provide new insights into the controlled coating of ACs and other porous materials with metals for developing high-performance SCs and other energy storage devices.
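
Figures of this kind are normally derived from galvanostatic charge-discharge curves using the standard relations C = I·Δt/(m·ΔV), E = C·ΔV²/(2·3.6) and P = 3600·E/Δt; the sketch below applies them to hypothetical discharge data rather than values taken from this study.

# Minimal sketch of the standard GCD relations; inputs are hypothetical.
def gcd_metrics(current_a, mass_g, discharge_time_s, voltage_window_v):
    c_sp = current_a * discharge_time_s / (mass_g * voltage_window_v)   # specific capacitance, F/g
    energy_wh_kg = 0.5 * c_sp * voltage_window_v ** 2 / 3.6             # energy density, Wh/kg
    power_w_kg = energy_wh_kg * 3600.0 / discharge_time_s               # power density, W/kg
    return c_sp, energy_wh_kg, power_w_kg

# Example: a 1 mg electrode discharged at 1 mA (i.e., 1 A/g) over a 1.0 V window.
print(gcd_metrics(current_a=0.001, mass_g=0.001, discharge_time_s=780, voltage_window_v=1.0))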

Keywords: supercapacitor, cyclic voltammetry, coating, energy density, activated carbon

Procedia PDF Downloads 63
509 Cross-Comparison between Land Surface Temperature from Polar and Geostationary Satellites over a Heterogeneous Landscape: A Case Study in Hong Kong

Authors: Ibrahim A. Adeniran, Rui F. Zhu, Man S. Wong

Abstract:

Owing to the limited spatial representativeness and continuity of in situ temperature measurements from weather stations (WS), the use of WS temperature measurements for wide-area diurnal analysis in heterogeneous landscapes has been limited. This has made the accurate estimation of land surface temperature (LST) from remotely sensed data all the more crucial. Moreover, the study of the dynamic interaction between the atmosphere and the physical surface of the Earth, at both annual and diurnal scales, could be enhanced by using optimal LST data derived from satellite sensors. The trade-off between the spatial and temporal resolution of LSTs from satellite thermal infrared sensors (TIRS) has, however, been a major challenge, especially when high spatiotemporal LST data are required. It is well known from the existing literature that polar-orbiting satellites have the advantage of high spatial resolution, while geostationary satellites offer high temporal resolution. Hence, this study aims to design a framework for the cross-comparison of LST data from polar and geostationary satellites over a heterogeneous landscape. This could help in understanding the relationship between the LST estimates from the two satellites and, consequently, their integration in diurnal LST analysis. Landsat-8 data will be used to represent the polar-orbiting satellites because of the availability of its long-term series, while the Himawari-8 satellite will be used as the geostationary data source because of its improved TIRS. The Hong Kong Special Administrative Region (HK SAR) will be selected as the study area because of the heterogeneity of its landscape. LST data will be retrieved from both satellites using the split-window algorithm (SWA), and the resulting data will be validated by comparing the satellite-derived LST with temperature data from automatic WS in HK SAR. The satellite-derived LST data will then be stratified by land use class in HK SAR using the Global Land Cover by National Mapping Organizations version 3 (GLCNMO 2013) data. The relationship between the LST data from Landsat-8 and Himawari-8 will then be investigated by land-use class and over the different seasons of the year in order to account for seasonal variation. The resulting relationship will be spatially and statistically analyzed and graphically visualized for detailed interpretation. The findings will reveal the relationship between the two satellite datasets as a function of land use class and season. While the information provided by this study will help in the optimal combination of LST data from the polar (Landsat-8) and geostationary (Himawari-8) satellites, it will also serve as a roadmap for annual and diurnal urban heat island (UHI) analysis in Hong Kong SAR.
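
The split-window algorithm mentioned above is commonly written in a generalized form that combines the brightness temperatures of two thermal bands with surface emissivity and atmospheric water vapor; the sketch below uses that generic form, with placeholder coefficients that are illustrative only and would need to be calibrated for each sensor (Landsat-8 TIRS or Himawari-8 AHI).

# Minimal sketch of a generalized split-window LST retrieval; coefficients are placeholders.
def split_window_lst(t_i, t_j, emis_mean, emis_diff, water_vapor, c):
    """t_i, t_j: brightness temperatures (K) of two thermal infrared bands."""
    dt = t_i - t_j
    return (t_i + c[1] * dt + c[2] * dt ** 2 + c[0]
            + (c[3] + c[4] * water_vapor) * (1.0 - emis_mean)
            + (c[5] + c[6] * water_vapor) * emis_diff)

coeffs = [-0.268, 1.378, 0.183, 54.30, -2.238, -129.20, 16.40]   # placeholder coefficients
print(split_window_lst(300.5, 299.2, 0.975, 0.006, 2.0, coeffs))  # LST in kelvin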

Keywords: automatic weather station, Himawari-8, Landsat-8, land surface temperature, land use classification, split window algorithm, urban heat island

Procedia PDF Downloads 73
508 Diurnal Cycle of Rainfall and Convective Properties over West and Central Africa

Authors: Balogun R. Ayodeji, Adefisan E. Adesanya, Adeyewa Z. Debo, E. C. Okogbue

Abstract:

The need to investigate diurnal weather cycles in West Africa stems from the fact that complex interactions often result from diurnal weather patterns. This study investigates the diurnal cycles of wind, rainfall and convective properties using six-hourly data from ERA-Interim and the Tropical Rainfall Measuring Mission (TRMM). The seven distinct zones used in this work, classified as rainforest (west-coast, dry, Nigeria-Cameroon), savannah (Nigeria, and Central Africa and South Sudan (CASS)), Sudano-Sahel and Sahel, were clearly indicated by the rainfall pattern in each zone. Results showed that the land-ocean warming contrast was strongly sensitive to the seasonal cycle: it was very weak during March-May (MAM) but clearly expressed during June-September (JJAS). Dipoles of wind convergence/divergence and wet/dry precipitation between the CASS and Nigeria savannah zones were identified in the morning and evening hours of MAM, whereas distinct night and day anomalies at the same CASS location were found to be consistent during the JJAS season. The diurnal variation of convective properties showed that stratiform precipitation, indicated by the extremely low flash-count climatology, was more dominant during the morning hours than during other periods of the day for both MAM and JJAS. On the other hand, the diurnal variation of system sizes showed that small systems were most dominant during the daytime for both MAM and JJAS, whereas larger systems were frequent during the evening, night and morning hours. The locations of flash counts and system sizes agreed with the earlier results that morning and daytime hours were dominated by stratiform precipitation and small system sizes, respectively. Most results clearly showed that the eastern parts of the Sudano-Sahel and Sahel zones were consistently dry, with few rainfall and precipitation features. System sizes greater than or equal to 800 km² were found along the western axis of the Sudano-Sahel and Sahel zones, whereas the eastern axis, particularly in the Sahel zone, had minimal occurrences of either small or large systems. Regarding the locations of extreme systems, a flash count greater than 275 in a single system was never observed at the morning (06Z) time step, whereas the evening (18Z) time step had the most frequent cases (at least 8) of flash counts exceeding 275 in a single system. The results presented demonstrate the importance of diurnal variation for understanding precipitation, flash-count and system-size patterns, as well as land-ocean contrast, precipitation and wind-field anomalies, at diurnal scales.
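
One plausible reading of the compositing step described above is to average the six-hourly rainfall by synoptic hour and season for each zone; the sketch below assumes a tidy table of extracted values (columns time, zone, rain_mm), which is a hypothetical stand-in for the ERA-Interim/TRMM processing chain.

import pandas as pd

def diurnal_composite(df):
    """Mean rainfall per zone, season (MAM/JJAS) and synoptic hour (0, 6, 12, 18 UTC)."""
    df = df.copy()
    df["hour"] = df["time"].dt.hour
    df["season"] = df["time"].dt.month.map(
        lambda m: "MAM" if m in (3, 4, 5) else ("JJAS" if m in (6, 7, 8, 9) else "other"))
    return (df[df["season"] != "other"]
            .groupby(["zone", "season", "hour"])["rain_mm"]
            .mean()
            .unstack("hour"))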

Keywords: convective properties, diurnal cycle, flash count, system sizes

Procedia PDF Downloads 132
507 Electroencephalography Correlates of Memorability While Viewing Advertising Content

Authors: Victor N. Anisimov, Igor E. Serov, Ksenia M. Kolkova, Natalia V. Galkina

Abstract:

The problem of the memorability of advertising content is closely connected with the key issues of neuromarketing. The memorability of advertising content contributes to the marketing effectiveness of the promoted product. Significant directions in the study of memorability are the memorability of the brand (detected through the memorability of the logo) and the memorability of the product offer (detected through memorization of dynamic audiovisual advertising content, i.e., commercials). The aim of this work is to reveal predictors of the memorization of static and dynamic audiovisual stimuli (logos and commercials). An important direction of the research was revealing differences in the psychophysiological correlates of memorability between static and dynamic audiovisual stimuli. We assumed that static and dynamic images are perceived in different ways and may differ in the memorization process. Objective methods of recording psychophysiological parameters while participants watch static and dynamic audiovisual materials are well suited to this aim. Electroencephalography (EEG) was recorded with the aim of identifying correlates of the memorability of the various stimuli in the electrical activity of the cerebral cortex. All stimuli (static and dynamic groups separately) were divided into two groups, remembered and not remembered, based on the results of a questionnaire. The questionnaires were filled out by the participants not immediately after viewing the stimuli but after a time interval (in order to detect stimuli retained in long-term memory). Using statistical methods, we developed a classifier (statistical model) that predicts which group (remembered or not remembered) a stimulus falls into, based on the psychophysiological response during perception. The output of the statistical model was compared with the results of the questionnaire. Conclusions: predictors of the memorability of static and dynamic stimuli have been identified, which makes it possible to predict which stimuli will have a higher probability of being remembered. A further development of this study will be the creation of a stimulus memory model capable of recognizing a stimulus as previously seen or new. Thus, in the process of remembering a stimulus, it is planned to take the stimulus recognition factor into account, which is one of the most important tasks for neuromarketing.
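
The classification step can be illustrated with a generic pipeline that predicts remembered versus not-remembered stimuli from EEG-derived features; the feature set (band powers per channel) and the logistic-regression model below are assumptions for illustration, not the statistical model actually used in the study, and the data are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 32))      # 120 trials x 32 band-power features (synthetic)
y = rng.integers(0, 2, size=120)    # 1 = remembered, 0 = not remembered (synthetic labels)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)   # on random data, accuracy stays near chance (~0.5)
print(scores.mean())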

Keywords: memory, commercials, neuromarketing, EEG, branding

Procedia PDF Downloads 251
506 The Impact of Information and Communications Technology (ICT)-Enabled Service Adaptation on Quality of Life: Insights from Taiwan

Authors: Chiahsu Yang, Peiling Wu, Ted Ho

Abstract:

The international community has shifted from emphasizing economic development to stressing public happiness, and it mainly hopes to understand whether the public's quality of life is improving. The Better Life Index (BLI) constructed by the OECD uses living conditions and quality of life as starting points to cover 11 areas of life and to convey the state of the general public's well-being. In light of the BLI framework, the Directorate General of Budget, Accounting and Statistics (DGBAS) of the Executive Yuan instituted the Gross National Happiness Index to understand the needs of the general public and to measure the progress of the aforementioned conditions among residents across the island. Living conditions consist of income and wealth, jobs and earnings, and housing conditions, while quality of life covers health status, work-life balance, education and skills, social connections, civic engagement and governance, environmental quality, and personal security. The ICT dimension consists of health care, living environment, ICT-enabled communication, transportation, government, education, pleasure, purchasing, and jobs and employment. In the wake of further science and technology development, the rapid formation of information societies, and the closer integration between lifestyles and information societies, the public's well-being within information societies has indeed become a noteworthy topic. The Board of Science and Technology of the Executive Yuan uses the OECD's BLI as a reference in the establishment of the Taiwan-specific ICT-Enabled Better Life Index. Using this index, the government plans to examine whether the public's quality of life is improving and to measure the public's satisfaction with its current digital quality of life. This understanding will enable the government to gauge the degree of influence and impact that each dimension of digital services has on digital life happiness, while also serving as an important reference for promoting digital service development. This paper presents the content of the ICT-Enabled Better Life Index. Information and communications technology (ICT) has been affecting people's lifestyles and, further, their quality of life (QoL). Even though studies have shown that ICT access and usage have both positive and negative impacts on life satisfaction and well-being, many governments continue to invest in e-government programs to initiate their path toward an information society. This research is one of the few attempts to link e-government benchmarks to subjective well-being perception, to further address the gap between users' perceptions and existing hard-data assessments, and then to propose a model that traces measurement results back to the original public policy so that policy makers can justify their future proposals.
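
A composite indicator of this kind is often built by normalizing each dimension to a common scale and averaging; the abstract does not describe the actual weighting scheme of the Taiwan ICT-Enabled Better Life Index, so the dimensions, scores and equal weights in the sketch below are hypothetical.

import numpy as np

raw = {                       # hypothetical dimension scores, arbitrary units
    "health care": 72.0,
    "ICT-enabled communication": 88.0,
    "transportation": 65.0,
    "e-government": 80.0,
}
vals = np.array(list(raw.values()))
normalized = 10.0 * (vals - vals.min()) / (vals.max() - vals.min())   # min-max scaled to 0-10
composite = normalized.mean()                                          # equal-weight average
print(dict(zip(raw, normalized.round(2))), round(composite, 2))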

Keywords: information and communications technology, quality of life, satisfaction, well-being

Procedia PDF Downloads 354
505 Marketization of Higher Education in the UK and Its Impacts on Teaching Practitioners

Authors: Hossein Rezaie

Abstract:

Academic institutions, especially universities, have been known as cradles of learning, places that teach great thinkers while creating the type of knowledge that is supposed to be free of utilitarian motives. Nonetheless, it seems that such intellectual centers have entered into competition with each other to attract the attention of potential clients. Not being immune to the whims and wishes of marketization, the system of higher education (HE) in the UK has been recalibrated by policy makers to address the demand for and supply of student education, academic research and other university activities on the basis of monetary factors. As an immediate example, the Russell Group in the UK, which comprises 24 leading UK research universities, has explicitly expressed its policy on its official website as follows: 'Russell Group universities are global businesses competing for staff, students and funding with the best in the world'. Furthermore, certain attempts have been made to corporatize the HE system, manifested in the remodeling of university governing bodies along corporate lines and in the development of measurement scales indicating the performance of teaching practitioners. Nevertheless, such structural changes in HE policy have a bearing on the practices of practitioners and educators, as well as on the identity of students, who are the customers of educational services. The effects of marketization have been examined mainly in terms of students' perceptions and motivation, institutional policies and university management. However, the teaching practitioner side appears to be an under-studied area with regard to changes in expectations, satisfaction and the perception of professional identity in the aftermath of introducing market-driven values into UK HE. As a result, this research aims to investigate the possible outcomes of market-driven values on the practitioner side of HE in the UK and seeks to address the following research questions: (1) How is the change in the mission of HE in the UK reflected in institutional documents? (1a) How is the change of mission represented in job adverts? (1b) How is the change of mission represented in university prospectuses? (2) How are teaching practitioners represented with regard to their roles and obligations in the prospectuses and job ads published by UK HE institutions? In order to address these questions, the researcher will analyze 30 prospectuses and job ads published by Russell Group universities, taking Critical Discourse Analysis as the point of departure and using the analytical methods of genre analysis and Systemic Functional Linguistics to probe the generic features and the representation of participants, in this case teaching practitioners, in the selected corpus.

Keywords: higher education, job advertisements, marketization of higher education, prospectuses

Procedia PDF Downloads 247