Search results for: channel error correction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3452

182 Wood Dust and Nanoparticle Exposure among Workers during a New Building Construction

Authors: Atin Adhikari, Aniruddha Mitra, Abbas Rashidi, Imaobong Ekpo, Jefferson Doehling, Alexis Pawlak, Shane Lewis, Jacob Schwartz

Abstract:

Building construction in the US involves numerous wooden structures. Wood is routinely used in walls, floor framing, stair framing, and the making of landings in building construction. Cross-laminated timbers are currently being used as construction materials for tall buildings. Numerous workers are involved in these timber-based constructions, and wood dust is one of the most common occupational exposures for them. Wood dust is a complex substance composed of cellulose, polyoses, and other substances. According to US OSHA, exposure to wood dust is associated with a variety of adverse health effects among workers, including dermatitis, allergic respiratory effects, mucosal and nonallergic respiratory effects, and cancers. The amount and size of particles released as wood dust differ according to the operations performed on the wood. For example, shattering of wood during sanding operations produces finer particles than does chipping in sawing and milling industries. To our knowledge, how the shattering, cutting, and sanding of wood and wood slabs during new building construction release fine particles and nanoparticles is largely unknown. The general belief is that the dust generated during timber cutting and sanding tasks consists mostly of large particles. Consequently, little attention has been given to the generated submicron ultrafine particles and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study with a newly developed nanoparticle monitor and conventional particle counters. This study was conducted at a large new building construction site in southern Georgia, primarily during the framing of wooden side walls, inner partition walls, and landings. Exposure levels of nanoparticles (n = 10) were measured by a newly developed nanoparticle counter (TSI NanoScan SMPS Model 3910) at four different distances (5, 10, 15, and 30 m) from the work location. Other airborne particles (number of particles/m³), including PM2.5 and PM10, were monitored using a 6-channel (0.3, 0.5, 1.0, 2.5, 5.0 and 10 µm) particle counter at 15 m, 30 m, and 75 m distances in both upwind and downwind directions. Mass concentrations of PM2.5 and PM10 (µg/m³) were measured using a DustTrak Aerosol Monitor. Temperature and relative humidity levels were recorded. Wind velocity was measured by a hot wire anemometer. Concentration ranges of nanoparticles of 13 particle sizes were: 11.5 nm: 221 – 816/cm³; 15.4 nm: 696 – 1735/cm³; 20.5 nm: 879 – 1957/cm³; 27.4 nm: 1164 – 2903/cm³; 36.5 nm: 1138 – 2640/cm³; 48.7 nm: 938 – 1650/cm³; 64.9 nm: 759 – 1284/cm³; 86.6 nm: 705 – 1019/cm³; 115.5 nm: 494 – 1031/cm³; 154 nm: 417 – 806/cm³; 205.4 nm: 240 – 471/cm³; 273.8 nm: 45 – 92/cm³; and 365.2 nm:

Keywords: wood dust, industrial hygiene, aerosol, occupational exposure

Procedia PDF Downloads 190
181 Applicability and Reusability of Fly Ash and Base Treated Fly Ash for Adsorption of Catechol from Aqueous Solution: Equilibrium, Kinetics, Thermodynamics and Modeling

Authors: S. Agarwal, A. Rani

Abstract:

Catechol is a natural polyphenolic compound that occurs widely in higher plants such as teas, vegetables, fruits, tobacco, and some traditional Chinese medicines. Fly ash-based zeolites are capable of adsorbing a wide range of pollutants, but the process of zeolite synthesis is time-consuming and requires technical setups by industry. The market costs of zeolites are quite high, restricting their use by small-scale industries for the removal of phenolic compounds. The present research proposes a simple method of alkaline treatment of FA to produce an effective adsorbent for catechol removal from wastewater. The effects of experimental parameters such as pH, temperature, initial concentration, and adsorbent dose on the removal of catechol were studied in a batch reactor. For this purpose, the adsorbent materials were mixed with aqueous solutions containing catechol at initial concentrations ranging from 50 to 200 mg/L and then shaken continuously in a thermostatic Orbital Incubator Shaker at 30 ± 0.1 °C for 24 h. The samples were withdrawn from the shaker at predetermined time intervals and separated by centrifugation (centrifuge machine MBL-20) at 2000 rpm for 4 min to yield a clear supernatant for analysis of the equilibrium concentrations of the solutes. The concentrations were measured with a double beam UV/Visible spectrophotometer (model Spectrscan UV 2600/02) at a wavelength of 275 nm for catechol. In the present study, the use of a low-cost adsorbent (BTFA) derived from coal fly ash (FA) has been investigated as a substitute for expensive methods for the sequestration of catechol. The FA and BTFA adsorbents were well characterized by XRF, FE-SEM with EDX, FTIR, and surface area and porosity measurements, which established the chemical constituents, functional groups, and morphology of the adsorbents. The catechol adsorption capacities of the synthesized BTFA and the native material were determined. Adsorption increased slightly with an increase in pH value. The monolayer adsorption capacities of FA and BTFA for catechol were 100 mg g⁻¹ and 333.33 mg g⁻¹, respectively, and maximum adsorption occurred within 60 minutes for both adsorbents used in this test. The equilibrium data were best fitted by the Freundlich isotherm on the basis of error analysis (RMSE, SSE, and χ²). Adsorption was found to be spontaneous and exothermic on the basis of the thermodynamic parameters (ΔG°, ΔS°, and ΔH°). The pseudo-second-order kinetic model better fitted the data for both FA and BTFA. BTFA showed greater adsorptive capacity, higher separation selectivity, and better recyclability than FA. These findings indicate that BTFA could be employed as an effective and inexpensive adsorbent for the removal of catechol from wastewater.
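
As an illustration of the isotherm fitting mentioned above, the short sketch below fits the Freundlich model qe = KF·Ce^(1/n) to a handful of made-up (Ce, qe) pairs and reports the RMSE and SSE error metrics cited in the abstract; the data values, initial guesses, and variable names are assumptions for illustration only, not the authors' measurements.

```python
# Minimal sketch of Freundlich isotherm fitting; the (Ce, qe) data are invented.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce ** (1.0 / n)

Ce = np.array([5.0, 12.0, 30.0, 55.0, 90.0])      # equilibrium concentration, mg/L (illustrative)
qe = np.array([40.0, 75.0, 140.0, 210.0, 290.0])  # amount adsorbed, mg/g (illustrative)

(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[10.0, 1.5])

residuals = qe - freundlich(Ce, KF, n)
rmse = np.sqrt(np.mean(residuals ** 2))   # root mean square error (RMSE)
sse = np.sum(residuals ** 2)              # sum of squared errors (SSE)
print(f"KF = {KF:.2f}, 1/n = {1.0 / n:.2f}, RMSE = {rmse:.2f}, SSE = {sse:.2f}")
```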

Keywords: catechol, fly ash, isotherms, kinetics, thermodynamic parameters

Procedia PDF Downloads 128
180 Rapid Atmospheric Pressure Photoionization-Mass Spectrometry (APPI-MS) Method for the Detection of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans in Real Environmental Samples Collected within the Vicinity of Industrial Incinerators

Authors: M. Amo, A. Alvaro, A. Astudillo, R. Mc Culloch, J. C. del Castillo, M. Gómez, J. M. Martín

Abstract:

Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) comprise a range of highly toxic compounds that may exist as particulates within the air or accumulate within water supplies, soil, or vegetation. They may be created naturally within the environment as products of forest fires or volcanic eruptions. It is only since the industrial revolution, however, that it has become necessary to closely monitor their generation as a byproduct of manufacturing/combustion processes, in an effort to mitigate widespread contamination events. The environmental concentrations of these toxins are expected to be extremely low; therefore, highly sensitive and accurate methods are required for their determination. Since ionization of non-polar compounds through electrospray and APCI is difficult and inefficient, we evaluate the performance of a novel low-flow Atmospheric Pressure Photoionization (APPI) source for the trace detection of various dioxins and furans using rapid Mass Spectrometry workflows. Air, soil and biota (vegetable matter) samples were collected monthly during one year from various locations within the vicinity of an industrial incinerator in Spain. Analytes were extracted using Soxhlet extraction in toluene and concentrated by rotary evaporation and nitrogen flow. Various ionization methods such as electrospray (ES) and atmospheric pressure chemical ionization (APCI) were evaluated; however, only the low-flow APPI source was capable of providing the performance, in terms of sensitivity, required for detecting all targeted analytes. In total, 10 analytes including 2,3,7,8-tetrachlorodibenzodioxin (TCDD) were detected and characterized using the APPI-MS method. Both PCDDs and PCDFs were detected most efficiently in negative ionization mode. The most abundant ion always corresponded to the loss of a chlorine and the addition of an oxygen, yielding [M-Cl+O]- ions. MRM methods were created in order to provide selectivity for each analyte. No chromatographic separation was employed; however, matrix effects were determined to have a negligible impact on analyte signals. Triple quadrupole mass spectrometry was chosen because of its unique potential for high sensitivity and selectivity. The mass spectrometer used was a Sciex QTRAP 3200 operating in negative multiple reaction monitoring (MRM) mode. Typical mass detection limits were determined to be near the 1-pg level. The APPI-MS2 technology applied to the detection of PCDD/Fs allows fast and reliable atmospheric analysis, considerably reducing operational times and costs with respect to other available technologies. In addition, the limit of detection can be easily improved using a more sensitive mass spectrometer, since the background in the analysis channel is very low. The APPI source developed by SEADM allows ionization of polar and non-polar compounds with high efficiency and repeatability.

Keywords: atmospheric pressure photoionization-mass spectrometry (APPI-MS), dioxin, furan, incinerator

Procedia PDF Downloads 211
179 The Use of Online Multimedia Platforms to Deliver a Regional Medical Schools Finals Revision Course During the COVID-19 Pandemic

Authors: Matthew Edmunds, Andrew Hunter, Clare Littlewood, Wisha Gul, Gabriel Heppenstall-Harris, Thomas Humphries

Abstract:

Background: Revision courses for medical students undertaking their final examinations are commonplace throughout the UK. Traditionally these take the form of a series of lectures over multiple weeks or a single day of intensive lectures. The COVID-19 pandemic, however, has required medical educators to create new teaching formats to ensure they adhere to social distancing requirements. It has provided an unexpected opportunity to accelerate the development of students' proficiency in the use of ‘technology-enabled communication platforms’, as mandated in the 2018 GMC Outcomes for Graduates. Recent advances in technology have made distance learning possible, whilst also providing novel and more engaging learning opportunities for students. Foundation Year 2 doctors at Aintree University Hospital developed an online series of videos to help prepare medical students in the North West and beyond for their final medical school examinations. Method: Eight hour-long videos covering the key topics in medicine and surgery were posted on the Peer Learning Liverpool YouTube channel. These videos were created using new technology such as the screen and audio recording platform Loom. Each video comprised at least 20 single best answer (SBA) questions, in keeping with the format of most medical school finals. Explanations of the answers were provided, and additional important material was covered. Students were able to ask questions by commenting on the videos, with the authors replying as soon as possible. Feedback was collated using an online Google form. Results: An average of 327 people viewed each video, with 113 students filling in the feedback form. 65.5% of respondents were within one month of their final medical school examinations. The average rating for how well prepared the students felt for their finals was 6.21/10 prior to the course and 8.01/10 after the course. A paired t-test demonstrated a mean increase of 1.80 (95% CI 1.66-1.93). Overall, 98.2% said the online format worked well or very well, and 99.1% would recommend the course to a peer. Conclusions: Based on the feedback received, the online revision course was successful both in terms of preparing students for their final examinations and with regards to how well the online format worked. Free-text qualitative feedback highlighted advantages such as: students could learn at their own pace, revisit key concepts important to them, and practise exam-style questions via the case-based format. Limitations identified included inconsistent audiovisual quality and requests for a live online Q&A session following the conclusion of the course. This course will be relaunched later in the year with increased opportunities for students to access live feedback. The success of this online course has shown the role that technology can play in medical education. As well as providing novel teaching modes, online learning allows students to access resources that otherwise would not be available locally, and ensures that they do not miss out on teaching that was previously provided face to face, in the current climate of social distancing.
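
For readers unfamiliar with the statistics reported above, the hedged sketch below shows how a paired t-test and a 95% confidence interval for the mean improvement could be computed; the before/after scores are invented stand-ins, not the 113 real survey responses.

```python
# Hedged sketch of the paired analysis reported above (0-10 preparedness ratings
# before and after the course). The two arrays are invented stand-ins, not the
# study's 113 real responses.
import numpy as np
from scipy import stats

before = np.array([5, 6, 7, 6, 5, 8, 6, 7, 6, 5], dtype=float)
after = np.array([7, 8, 8, 8, 7, 9, 8, 9, 7, 7], dtype=float)

diff = after - before
t_stat, p_value = stats.ttest_rel(after, before)        # paired t-test

# 95% confidence interval for the mean improvement
mean_diff = diff.mean()
ci_low, ci_high = stats.t.interval(0.95, len(diff) - 1, loc=mean_diff, scale=stats.sem(diff))
print(f"mean improvement = {mean_diff:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"95% CI ({ci_low:.2f}, {ci_high:.2f})")
```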

Keywords: COVID-19 pandemic, Medical School, Online learning, Revision course

Procedia PDF Downloads 158
178 Study of Formation and Evolution of Disturbance Waves in Annular Flow Using Brightness-Based Laser-Induced Fluorescence (BBLIF) Technique

Authors: Andrey Cherdantsev, Mikhail Cherdantsev, Sergey Isaenkov, Dmitriy Markovich

Abstract:

In annular gas-liquid flow, liquid flows as a film along the pipe walls, sheared by a high-velocity gas stream. The film surface is covered by large-scale disturbance waves, which affect pressure drop and heat transfer in the system and are necessary for the entrainment of liquid droplets from the film surface into the core of the gas stream. Disturbance waves are highly complex, and their properties are affected by numerous parameters. One such aspect is flow development, i.e., the change of flow properties with distance from the inlet. In the present work, this question is studied using the brightness-based laser-induced fluorescence (BBLIF) technique. This method enables one to perform simultaneous measurements of local film thickness in a large number of points at high sampling frequency. In the present experiments, the first 50 cm of upward and downward annular flow in a vertical pipe of 11.7 mm i.d. is studied with a temporal resolution of 10 kHz and a spatial resolution of 0.5 mm. Thus, the spatiotemporal evolution of the film surface can be investigated, including scenarios of formation, acceleration and coalescence of disturbance waves. The behaviour of the disturbance wave velocity depending on the phase flow rates and downstream distance was investigated. Besides measuring the wave properties, the goal of the work was to investigate the interrelation between disturbance wave properties and integral characteristics of the flow, such as the interfacial shear stress and the flow rate of the dispersed phase. In particular, it was shown that the initial acceleration of disturbance waves, defined by the value of shear stress, linearly decays with downstream distance. This lack of acceleration, which may even turn into deceleration, is related to liquid entrainment. The flow rate of the dispersed phase grows linearly with downstream distance. During entrainment events, liquid is extracted directly from disturbance waves, reducing their mass, their area of interaction with the gas shear and, hence, their velocity. The passing frequency of disturbance waves at each downstream position was measured automatically with a new algorithm for identifying the characteristic lines of individual disturbance waves. Scenarios of coalescence of individual disturbance waves were identified. The transition from the initial high-frequency Kelvin-Helmholtz waves appearing at the inlet to highly nonlinear disturbance waves with lower frequency was studied near the inlet using a 3D realisation of the BBLIF method in the same cylindrical channel and in a rectangular duct with a cross-section of 5 mm by 50 mm. It was shown that the initial waves are generally two-dimensional but are promptly broken into localised three-dimensional wavelets. Coalescence of these wavelets leads to the formation of quasi two-dimensional disturbance waves. Using cross-correlation analysis, the loss and restoration of the two-dimensionality of the film surface with downstream distance were studied quantitatively. It was shown that all the processes occur closer to the inlet at higher gas velocities.
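
As a rough illustration of how a disturbance-wave velocity can be extracted from film-thickness records at two downstream positions, the sketch below cross-correlates two synthetic signals sampled at the 10 kHz rate mentioned above; the probe spacing, wave speed, and signal shape are invented assumptions, not the BBLIF data.

```python
# Hedged sketch: wave velocity from the lag of the peak cross-correlation between
# two film-thickness signals. The signals are synthetic; only the 10 kHz sampling
# rate mirrors the experiment, the spacing and speed are assumptions.
import numpy as np

fs = 10_000.0                              # sampling frequency, Hz (as in the abstract)
dx = 0.05                                  # probe separation, m (assumed)
true_speed = 2.0                           # m/s, used only to build the synthetic data
n = 5_000                                  # half a second of data
shift = int(round(dx / true_speed * fs))   # true delay in samples (here 250)

rng = np.random.default_rng(0)
base = np.convolve(rng.standard_normal(n + shift), np.ones(200) / 200, mode="same")
h1 = base[shift:shift + n]                 # "upstream" film thickness signal
h2 = base[:n]                              # same signal delayed by `shift` samples

a, b = h1 - h1.mean(), h2 - h2.mean()
lag = np.argmax(np.correlate(b, a, mode="full")) - (n - 1)
print(f"estimated wave velocity ~ {dx / (lag / fs):.2f} m/s (true value {true_speed})")
```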

Keywords: annular flow, disturbance waves, entrainment, flow development

Procedia PDF Downloads 255
177 A Qualitative Study of Newspaper Discourse and Online Discussions of Climate Change in China

Authors: Juan Du

Abstract:

Climate change is one of the most crucial issues of this era, with contentious debates on it among scholars, but there are sparse studies on climate change discourse in China. Including China in the study of climate change is essential for a sociological understanding of climate change. China -- as a developing country and an essential player in tackling climate change -- offers an ideal case for scholars moving beyond developed countries and enriching their understandings of climate change by including diverse social settings. This project contrasts the macro- and micro-level understandings of climate change in China, which helps scholars move beyond a focus on climate skepticism and denialism and enriches the sociology of climate change knowledge. The macro-level understanding of climate change is obtained by analyzing over 4,000 newspaper articles from various official outlets in China. State-controlled newspapers play an essential role in transmitting high-quality information and promoting broader public understanding of climate change and its anthropogenic nature. Thus, newspaper articles can be seen as tools employed by governments to mobilize the public in support of a strategy shift from economic growth to an ecological civilization. However, the media is just one of the significant factors influencing an individual's climate change concern. Extreme weather events, access to accurate scientific information, elite cues, and movement/countermovement advocacy also influence an individual's perceptions of climate change. Hence, there are differences in the ways that newspaper articles and the public frame the issues. Online forums are an informative channel for scholars to understand the public's opinion. The micro-level data come from Zhihu, which is China's equivalent of Quora. Users can propose, answer, and comment on questions. This project analyzes the questions related to climate change which have over 20 answers. By open-coding both the macro- and micro-level data, this project will depict the differences between ideology as presented in government-controlled newspapers and how people talk and act with respect to climate change in cyberspace, which may provide an idea about any existing disconnect between public behavior and people's willingness to change daily activities to facilitate a greener society. The contemporary Yellow Vest protests in France illustrate that a large gap between governmental policies of climate change mitigation and the public's understanding may lead to social movement activity and social instability. Effective environmental policy is impossible without the public's support. Finding existing gaps in understanding may help policy-makers develop effective ways of framing climate change and obtain more supporters of climate change-related policies. Overall, this qualitative project provides answers to the following research questions: 1) How do different state-controlled newspapers transmit their ideology on climate change to the public, and in what ways? 2) How do individuals frame climate change online? 3) What are the differences between newspapers' framing and individuals' framing?

Keywords: climate change, China, framing theory, media, public’s climate change concern

Procedia PDF Downloads 134
176 Communicating Nuclear Energy in Southeast Asia: A Cross-Country Comparison of Communication Channels and Source Credibility

Authors: Shirley S. Ho, Alisius X. L. D. Leong, Jiemin Looi, Agnes S. F. Chuah

Abstract:

Nuclear energy is a contentious technology that has attracted much public debate over the years. The prominence of nuclear energy in Southeast Asia (SEA) has burgeoned due to the surge of interest and plans for nuclear development in the region. Understanding public perceptions of nuclear energy in SEA is pertinent given the limited number of studies conducted. In particular, five SEA nations – Singapore, Malaysia, Indonesia, Thailand, and Vietnam – are of immediate interest, as they are amongst the most economically developed or developing nations in the SEA region. High energy demands from economic development in these nations have led to considerations of adopting nuclear energy as an alternative source of energy. This study aims to explore whether differences in the nuclear developmental stage in each country affect public perceptions of nuclear energy. In addition, this study seeks to find out about the type and importance of communication credibility as a judgement heuristic in facilitating message acceptance across these five countries. The credibility of a communication channel is a crucial component influencing public perception, acceptance, and attitudes towards nuclear energy. Aside from simply identifying the frequently used communication channels, it is of greater significance to understand public perception of source and media credibility. Given the lack of studies conducted in SEA, this exploratory study adopts a qualitative approach to elicit a spectrum of opinions and insights regarding the key communication aspects influencing public perceptions of nuclear energy. Specifically, the capitals of the abovementioned countries – Kuala Lumpur, Bangkok, and Hanoi – were selected, with the exceptions of Singapore, an island city-state, and Indonesia, where Yogyakarta, on the country's most populous island, was chosen to better understand public perception towards nuclear energy. Focus group discussions were utilized as the mode of data collection to elicit a wide variety of viewpoints held by the participants, which is well-suited for exploratory research. In total, 156 participants took part in the 13 focus group discussions. The participants were either local citizens or permanent residents aged between 18 and 69 years old. Each of the focus groups consisted of 8-10 participants, including both male and female participants. The transcripts from each focus group were analysed using NVivo 10, and the text was organised according to the emerging themes or categories. The general public in all the countries was familiar with nuclear energy but had no in-depth knowledge of it. Four dimensions of nuclear energy communication were identified based on the focus group discussions: communication channels, perceived credibility of sources, circumstances for discussion, and discussion style. The first dimension, communication channels, refers to the medium through which participants receive information about nuclear energy. Four types of media emerged from the discussions: online and social media, broadcast media, print media, and word-of-mouth (WOM). Collectively, across all five countries, participants were found to engage in different types of knowledge acquisition and information-seeking behavior depending on the communication channels used.

Keywords: nuclear energy, public perception, communication, Southeast Asia, source credibility

Procedia PDF Downloads 309
175 Mobile Learning in Developing Countries: A Synthesis of the Past to Define the Future

Authors: Harriet Koshie Lamptey, Richard Boateng

Abstract:

Mobile learning (m-learning) is a novel approach to knowledge acquisition and dissemination and is gaining global attention. Steady progress in wireless technologies and the portability of communication devices continue to broaden the scope and use of mobiles. With the convergence of Web functionality onto mobile platforms and the affordability and availability of mobile technology, m-learning has the potential of being the next prevalent channel of education in both formal and informal settings. There is a substantive literature on developed countries, but the state of the field in developing countries (DCs) appears vague. This paper is a synthesis of the extant literature on mobile learning in DCs. The research interest is based on the fact that in DCs, mobile communication and internet connectivity are popular; however, their use in education is underexplored. There are some reviews on the state, conceptualizations, trends and teacher education, but to the authors' knowledge, no study has focused on mobile learning adoption and integration issues. This study examines issues and gaps associated with its adoption and integration in higher education institutions in DCs. A qualitative synthesis of the literature was conducted using articles pooled from electronic databases (Google Scholar and ERIC). To enable criteria for inclusion and incorporate diverse study perspectives, the search terms used were m-learning, DCs, higher education institutions, challenges, benefits, impact, gaps and issues. The synthesis revealed that though mobile technology has diffused globally, its pedagogical pursuit in DCs remains quite low. The absence of a mobile Web and the difficulty of converting resources into mobile format, due to lack of funding and technical competence, are stumbling blocks. Again, the lack of established design and implementation rules to guide the development of m-learning platforms in DCs is a hindrance. The absence of access restrictions on devices poses security threats to institutional systems. Negative perceptions that devices are taking over faculty roles lead to resistance in some situations. Resistance to change can be a hindrance to the acceptance and success of new systems. Lack of interest in m-learning is also attributed to the lower technological literacy levels of the underprivileged masses. Scholarly work on m-learning in DCs is yet to mature. Most technological innovations are handed down from developed countries, and this constantly creates a lag for DCs. A lack of theoretical grounding was also identified, which reduces the objectivity of study reports. The socio-cultural terrain of DCs results in societies with different views and needs, which has been identified as a hindrance to research. Institutional commitment decisions, adequate funding for the necessary infrastructural development, as well as multiple stakeholder participation, are important for project success. Evidence suggests that while adoption decisions are readily made, successful integration of the concept, so that its full benefits can be realized, is often neglected. Recommendations were made to provide possible remedies to the identified issues.

Keywords: developing countries, higher education institutions, mobile learning, literature review

Procedia PDF Downloads 229
174 The Development of Traffic Devices Using Natural Rubber in Thailand

Authors: Weeradej Cheewapattananuwong, Keeree Srivichian, Godchamon Somchai, Wasin Phusanong, Nontawat Yoddamnern

Abstract:

Natural rubber used for traffic devices in Thailand has been developed and researched for several years. Compared with Dry Rubber Content (DRC) rubber, the quality of Ribbed Smoked Sheet (RSS) is better. However, the cost of admixtures, especially CaCO₃ and sulphur, is higher than the cost of RSS itself. In this research, flexible guideposts and Rubber Fender Barriers (RFB) are taken into consideration. In the case of flexible guideposts, the materials used are both RSS and DRC60%, but for RFB, only RSS is used due to the controlled performance tests. The objective of flexible guideposts and RFB is to decrease the number of accidents, fatality rates, and serious injuries. The functions of both devices are to protect road users and vehicles as well as to absorb impact forces from vehicles so as to reduce serious road accidents. This leads to mitigation methods that remedy the injuries of motorists, from severe to moderate. The solution is to find the best practice for traffic devices using natural rubber under engineering concepts. In addition, the performance of the materials, such as tensile strength and durability, is evaluated together with the modulus of elasticity and related properties. In the laboratory, crash simulation, finite element analysis of materials, LRFD, and concrete technology methods are taken into account. After calculation, trial compositions of the materials are mixed and tested in the laboratory. The tensile, compressive, and weathering (durability) tests follow ASTM standards. Furthermore, the cycle-repetition test of the flexible guideposts will be taken into consideration. The final decision is to fabricate all materials and have a real test section in the field. In the RFB test, there will be 13 crash tests: 7 pickup truck tests and 6 motorcycle tests. Vehicular crash testing of this kind is carried out for the first time in Thailand, applying trial-and-error methods; for example, the road crash test under the NCHRP TL-3 standard (100 kph) is changed to MASH 2016. This is owing to the fact that MASH 2016 is better than NCHRP in terms of the speed, types, and weight of vehicles and the angle of crash. In the MASH procedures, Test Level 6 (TL-6), which comprises a 2,270 kg pickup truck, 100 kph, and a 25-degree crash angle, is selected. The final real crash test will be done, and the whole system will be evaluated again in Korea. The researchers hope that the number of road accidents will decrease and that Thailand will no longer be in the top ten ranking of road accidents in the world.

Keywords: LRFD, load and resistance factor design, ASTM, American Society for Testing and Materials, NCHRP, National Cooperative Highway Research Program, MASH, Manual for Assessing Safety Hardware

Procedia PDF Downloads 132
173 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of the short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost, which yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
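
To make conclusions (1)-(3) concrete, the sketch below trains an XGBoost regressor on a synthetic stand-in for the latency dataset, compares it with the 15-minutes-ago baseline, and ranks the inputs by mutual information; all feature names, data values, and hyperparameters are illustrative assumptions, not the study's real pipeline.

```python
# Hedged sketch: median-latency prediction with XGBoost on synthetic data, a
# comparison against the "latency 15 minutes ago" baseline, and a mutual-
# information ranking of the inputs. Everything here is invented for illustration.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
n = 2000
lag_5 = rng.gamma(4.0, 3.0, n)              # median latency 5 min ago (synthetic)
lag_15 = lag_5 + rng.normal(0, 1.0, n)      # median latency 15 min ago (synthetic)
accumulation = rng.normal(100, 20, n)       # total vehicle accumulation (synthetic)
exit_rate = rng.normal(10, 2, n)            # deliberately uninformative input
target = 0.7 * lag_5 + 0.01 * accumulation + rng.normal(0, 0.5, n)

X = np.column_stack([lag_5, lag_15, accumulation, exit_rate])
names = ["lag_5", "lag_15", "accumulation", "exit_rate"]

split = int(0.8 * n)
model = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=4)
model.fit(X[:split], target[:split])
pred = model.predict(X[split:])

mse_model = mean_squared_error(target[split:], pred)
mse_baseline = mean_squared_error(target[split:], lag_15[split:])  # naive baseline
print(f"XGBoost MSE {mse_model:.3f} vs 15-min-lag baseline {mse_baseline:.3f}")

# Mutual information between each input and the target, used to rank features.
for name, mi in zip(names, mutual_info_regression(X, target)):
    print(f"MI({name}, target) = {mi:.3f}")
```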

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 172
172 Applying the Global Trigger Tool in German Hospitals: A Retrospective Study in Surgery and Neurosurgery

Authors: Mareen Brosterhaus, Antje Hammer, Steffen Kalina, Stefan Grau, Anjali A. Roeth, Hany Ashmawy, Thomas Gross, Marcel Binnebosel, Wolfram T. Knoefel, Tanja Manser

Abstract:

Background: The identification of critical incidents in hospitals is an essential component of improving patient safety. To date, various methods have been used to measure and characterize such critical incidents. These methods are often viewed by physicians and nurses as external quality assurance, and this creates obstacles to the reporting of events and the implementation of recommendations in practice. One way to overcome this problem is to use tools that directly involve staff in measuring indicators of the quality and safety of care in the department. One such instrument is the global trigger tool (GTT), which helps physicians and nurses identify adverse events by systematically reviewing randomly selected patient records. Based on so-called ‘triggers’ (warning signals), indications of adverse events can be given. While the tool is already used internationally, its implementation in German hospitals has been very limited. Objectives: This study aimed to assess the feasibility and potential of the global trigger tool for identifying adverse events in German hospitals. Methods: A total of 120 patient records were randomly selected from two surgical departments and one neurosurgery department of three university hospitals in Germany, over a period of two months per department, between January and July 2017. The records were reviewed using an adaptation of the German version of the Institute for Healthcare Improvement Global Trigger Tool to identify triggers and adverse event rates per 1000 patient days and per 100 admissions. The severity of adverse events was classified using the National Coordinating Council for Medication Error Reporting and Prevention categories. Results: A total of 53 adverse events were detected in the three departments. This corresponded to adverse event rates of 25.5-72.1 per 1000 patient-days and from 25.0 to 60.0 per 100 admissions across the three departments. 98.1% of identified adverse events were associated with non-permanent harm without (Category E, 71.7%) or with (Category F, 26.4%) the need for prolonged hospitalization. One adverse event (1.9%) was associated with potentially permanent harm to the patient. We also identified practical challenges in the implementation of the tool, such as the need to adapt the global trigger tool to the respective department. Conclusions: The global trigger tool is feasible and an effective instrument for quality measurement when adapted to the departmental specifics. Based on our experience, we recommend continuous use of the tool, thereby directly involving clinicians in quality improvement.
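
For clarity, the tiny sketch below spells out the two rate definitions used above with invented counts; the abstract does not report the underlying patient-day and admission totals per department, so these numbers are hypothetical.

```python
# Hypothetical numbers only, to illustrate the rate arithmetic.
adverse_events = 20        # events found in one department (invented)
patient_days = 550         # total patient-days covered by the reviewed records (invented)
admissions = 40            # number of reviewed admissions, i.e. records (invented)

per_1000_patient_days = adverse_events / patient_days * 1000
per_100_admissions = adverse_events / admissions * 100
print(f"{per_1000_patient_days:.1f} per 1000 patient-days, {per_100_admissions:.1f} per 100 admissions")
```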

Keywords: adverse events, global trigger tool, patient safety, record review

Procedia PDF Downloads 256
171 Development of Biosensor Chip for Detection of Specific Antibodies to HSV-1

Authors: Zatovska T. V., Nesterova N. V., Baranova G. V., Zagorodnya S. D.

Abstract:

In recent years, biosensor technologies based on the phenomenon of surface plasmon resonance (SPR) have become increasingly used in biology and medicine. Their application facilitates real-time monitoring of the binding of biomolecules and the identification of agents that specifically interact with biologically active substances immobilized on the biosensor surface (biochips). Special attention is paid to the use of biosensor analysis for determining antibody-antigen interactions in the diagnosis of diseases caused by viruses and bacteria. According to the WHO, diseases caused by the herpes simplex virus (HSV) take second place (15.8%) after influenza as a cause of death from viral infections. Current diagnostics of HSV infection include PCR and ELISA assays. The latter allows determination of the degree of immune response to viral infection and the respective stages of its progress. In this regard, the search for new and accessible diagnostic methods is very important. This work aimed to develop a biosensor chip for the detection of specific antibodies to HSV-1 in human blood serum. The proteins of HSV-1 (strain US) were used as antigens. The viral particles were accumulated in MDBK cell culture and purified by differential centrifugation in a cesium chloride density gradient. Analysis of the HSV-1 proteins was performed by polyacrylamide gel electrophoresis and ELISA. The protein concentration was measured using a DeNovix DS-11 spectrophotometer. The device for the detection of antigen-antibody interactions was an optoelectronic two-channel spectrometer, ‘Plasmon-6’, using the SPR phenomenon in the Kretschmann optical configuration. It was developed at the Lashkarev Institute of Semiconductor Physics of NASU. The carrier used was a glass plate covered with a 45 nm gold film. Screening of human blood sera was performed using the test system ‘HSV-1 IgG ELISA’ (GenWay, USA). Development of the biosensor chip included optimization of the conditions of viral antigen sorption and of the analysis steps. For immobilization of the viral proteins, a 0.2% solution of Dextran 17,200 (Sigma, USA) was used. Sorption of the antigen took place at 4-8°C over 18-24 hours. After washing the chip three times with citrate buffer (pH 5.0), a 1% solution of BSA was applied to block the sites not occupied by viral antigen. A direct dependence was found between the amount of immobilized HSV-1 antigen and the SPR response. Using the obtained biochips, panels of 25 human sera positive and 10 negative for antibodies to HSV-1 were analyzed. The average SPR response was 185 a.s. for negative sera and from 312 to 1264 a.s. for positive sera. It was shown that the SPR data agreed with the ELISA results in 96% of samples, demonstrating the great potential of SPR in such research. The possibility of biochip regeneration was investigated, and it was shown that application of a 10 mM NaOH solution leads to rupture of the intermolecular bonds. This allows the chip to be reused several times. Thus, in this study, a biosensor chip for the detection of specific antibodies to HSV-1 was successfully developed, expanding the range of diagnostic methods for this pathogen.

Keywords: biochip, herpes virus, SPR

Procedia PDF Downloads 421
170 Keeping under the Hat or Taking off the Lid: Determinants of Social Enterprise Transparency

Authors: Echo Wang, Andrew Li

Abstract:

Transparency could be defined as the voluntary release of information by institutions that is relevant to their own evaluation. Transparency based on information disclosure is recognised to be vital for the Third Sector, as civil society organisations are under pressure to become more transparent to answer the call for accountability. The growing importance of social enterprises as hybrid organisations emerging from the nexus of the public, the private and the Third Sector makes their transparency a topic worth exploring. However, transparency for social enterprises has not yet been studied: as a new form of organisation that combines non-profit missions with commercial means, it is unclear to both the practical and the academic world whether the shift in operational logics from non-profit motives to for-profit pursuits has significantly altered their transparency. This is especially so in China, where informational governance and practices of information disclosure by local governments, industries and civil society are notably different from those in other countries. This study investigates the transparency-seeking behaviour of social enterprises in Greater China to understand what factors at the organisational level may affect their transparency, measured by their willingness to disclose financial information. We make use of the Survey on the Models and Development Status of Social Enterprises in the Greater China Region (MDSSGCR) conducted in 2015-2016. The sample consists of more than 300 social enterprises from the Mainland, Hong Kong and Taiwan. While most respondents have provided complete answers to most of the questions, there is tremendous variation in the respondents' demonstrated level of transparency in answering those questions related to the financial aspects of their organisations, such as total revenue, net profit, sources of revenue and expenses. This has led to a lot of missing data on such variables. In this study, we take missing data as data. Specifically, we use missing values as a proxy for an organisation's level of transparency. Our dependent variables are constructed from missing data on total revenue, net profit, source of revenue and cost breakdown. In addition, we also take into consideration the quality of answers in coding the dependent variables. For example, to be coded as transparent, an organisation must report the sources of at least 50% of its revenue. We have four groups of predictors of transparency, namely the nature of the organisation, the decision-making body, the funding channel and the field of concentration. Furthermore, we control for an organisation's stage of development, self-identity and region. The results show that social enterprises that are at later stages of organisational development and are funded by financial means are significantly more transparent than others. There is also some evidence that social enterprises located in the Northeast region of China are less transparent than those located in other regions, probably because of local political economy features. On the other hand, the nature of the organisation, the decision-making body and the field of concentration do not systematically affect the level of transparency. This study provides in-depth empirical insights into the information disclosure behaviour of social enterprises in a specific social context. It not only reveals important characteristics of Third Sector development in China but also contributes to the general understanding of hybrid institutions.
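
The coding rule described above (missing financial answers as a lack-of-transparency signal, with a 50% threshold for revenue sources) could be implemented along the lines of the hedged sketch below; the dataframe and column names are illustrative and do not reproduce the MDSSGCR survey's real variables.

```python
# Hedged sketch: constructing transparency indicators from missing survey answers.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "total_revenue": [120000, np.nan, 45000, np.nan],
    "net_profit": [8000, np.nan, np.nan, 2000],
    "revenue_source_share_reported": [0.9, 0.2, 0.6, np.nan],  # share of revenue with a named source
})

# An organisation counts as transparent on a financial item if it answered it;
# for revenue sources, at least 50% of revenue must be attributed (as in the abstract).
df["transparent_revenue"] = df["total_revenue"].notna().astype(int)
df["transparent_profit"] = df["net_profit"].notna().astype(int)
df["transparent_sources"] = (df["revenue_source_share_reported"].fillna(0) >= 0.5).astype(int)

# A simple overall indicator: transparent on all three items.
df["transparent_overall"] = df[
    ["transparent_revenue", "transparent_profit", "transparent_sources"]
].min(axis=1)
print(df)
```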

Keywords: China, information transparency, organisational behaviour, social enterprise

Procedia PDF Downloads 187
169 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning

Authors: Saahith M. S., Sivakami R.

Abstract:

In the realm of football analytics, particularly focusing on predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, conducting data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis within this study will involve delving into feature significance using methodologies like Select Best and Recursive Feature Elimination (RFE) to pinpoint pertinent attributes for predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. The evaluation of each model's performance utilizing metrics such as Mean Squared Error (MSE) and R-squared will be executed to gauge their efficacy in predicting player performance. Furthermore, this investigation will encompass a top player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis will entail scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis will concentrate on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
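
A minimal sketch of the workflow outlined above (feature selection with Recursive Feature Elimination, then a random forest regressor evaluated with MSE and R-squared) is given below; the synthetic dataset, feature count, and hyperparameters are assumptions standing in for real player attributes.

```python
# Hedged sketch: RFE feature selection + random forest regression, scored with
# MSE and R-squared. The dataset is synthetic; real player attributes and an
# overall-rating target would replace it.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_players, n_features = 1000, 12
X = rng.normal(size=(n_players, n_features))            # stand-in player attributes
overall = 3 * X[:, 0] + 2 * X[:, 1] - X[:, 2] + rng.normal(0, 0.5, n_players)

X_train, X_test, y_train, y_test = train_test_split(X, overall, test_size=0.2, random_state=0)

# Recursive Feature Elimination keeps the attributes most predictive of the target.
selector = RFE(LinearRegression(), n_features_to_select=5).fit(X_train, y_train)
X_train_sel, X_test_sel = selector.transform(X_train), selector.transform(X_test)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train_sel, y_train)
pred = model.predict(X_test_sel)
print(f"MSE = {mean_squared_error(y_test, pred):.3f}, R2 = {r2_score(y_test, pred):.3f}")
```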

Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis

Procedia PDF Downloads 41
168 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses. These losses contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce secondary flow losses. In this paper, the non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments and the optimization. All the flow simulations were conducted using steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves. Each cut, having multiple control points, was created along the virtual streamlines in the blade channel. For the design of experiments, each sample was arbitrarily generated based on values automatically chosen for the control points defined during parameterization. The optimization was achieved using two algorithms, i.e., a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm based on an artificial neural network was used as the optimization method in order to achieve the global optimum. The evaluation of the successive design iterations was performed using the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it requires derivative information of the objective function. The objective function was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant. The performance was quantified by using a multi-objective function. Apart from these two classes of optimization methods, there were four optimization cases, i.e., the hub only, the shroud only, the combination of hub and shroud, and a fourth case in which the shroud endwall was optimized using the already optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor. The adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub was increased. The shroud optimization resulted in an increase in efficiency, while the total pressure loss and entropy were reduced. The combination of hub and shroud did not show the improvements achieved for the individual cases of the hub and the shroud. This may be caused by the fact that there were too many control variables. The fourth optimization case showed the best result because the optimized hub was used as the initial geometry for optimizing the shroud. The efficiency increase was larger than in the individual optimization cases, with a mass flow rate equal to that of the baseline design of the turbine. The results of the artificial neural network and the conjugate gradient method were compared.
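
One building block of the parameterization described above, a single endwall cut whose height perturbation is a Bezier curve through a few control points, is sketched below; the control-point values are arbitrary, and the coupling to the NUMECA parameterization and the flow solver is not shown.

```python
# Hedged sketch: a Bezier-curve height perturbation along one endwall cut.
import numpy as np
from math import comb

def bezier(control_points, n_samples=101):
    """Evaluate a Bezier curve from its control points via Bernstein polynomials."""
    cp = np.asarray(control_points, dtype=float)
    n = len(cp) - 1
    t = np.linspace(0.0, 1.0, n_samples)
    basis = np.array([comb(n, i) * t**i * (1 - t)**(n - i) for i in range(n + 1)])
    return basis.T @ cp

# Height perturbation (in mm) along one pitchwise cut, zero at both endpoints so
# the contoured endwall blends smoothly into the axisymmetric surface.
control_heights = [0.0, 1.5, -0.8, 2.0, 0.0]      # arbitrary illustrative values
perturbation = bezier(control_heights)
print(f"max bump {perturbation.max():.2f} mm, max dent {perturbation.min():.2f} mm")
```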

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 226
167 Subjective Probability and the Intertemporal Dimension of Probability to Correct the Misrelation Between Risk and Return of a Financial Asset as Perceived by Investors. Extension of Prospect Theory to Better Describe Risk Aversion

Authors: Roberta Martino, Viviana Ventre

Abstract:

From a theoretical point of view, the relationship between the risk associated with an investment and the expected value is directly proportional, in the sense that the market allows a greater result to those who are willing to take a greater risk. However, empirical evidence proves that this relationship is distorted in the minds of investors and is perceived as exactly the opposite. To deepen and understand the discrepancy between the actual actions of the investor and the theoretical predictions, this paper analyzes the essential parameters used for the valuation of financial assets, with greater attention to two elements: probability and the passage of time. Although these may seem at first glance to be two distinct elements, they are closely related. In particular, the error in the theoretical description of the relationship between risk and return lies in the failure to consider the impatience that is generated in the decision-maker when events that have not yet happened occur in the decision-making context. In this context, probability loses its objective meaning and, in relation to the psychological aspects of the investor, it can only be understood as the degree of confidence that the investor has in the occurrence or non-occurrence of an event. Moreover, the concept of objective probability does not consider the inter-temporality that characterizes financial activities and does not consider the limited cognitive capacity of the decision maker. Cognitive psychology has made it possible to understand that the mind acts with a compromise between quality and effort when faced with very complex choices. To evaluate an event that has not yet happened, it is necessary to imagine it happening in one's head. This projection into the future requires a cognitive effort and is what differentiates choices under conditions of risk from choices under conditions of uncertainty. In fact, since the receipt of the outcome in choices under risk conditions is imminent, the mechanism of self-projection into the future is not necessary to imagine the consequence of the choice, and decision makers dwell on the objective analysis of possibilities. Financial activities, on the other hand, develop over time, and objective probability is too static to consider the anticipatory emotions that the self-projection mechanism generates in the investor. Assuming that uncertainty is inherent in valuations of events that have not yet occurred, the focus must shift from risk management to uncertainty management. Only in this way can the intertemporal dimension of the decision-making environment and the haste generated by the financial market be properly taken into account. The work considers an extension of prospect theory with a temporal component, with the aim of providing a description of the attitude towards risk with respect to the passage of time.
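
For orientation only, two standard textbook ingredients of the kind of model discussed above are the cumulative prospect theory probability weighting function and a hyperbolic discount factor; the authors' specific temporal extension is not spelled out in the abstract, so the formulas below are background references, not the paper's model.

```latex
w(p) = \frac{p^{\gamma}}{\left( p^{\gamma} + (1-p)^{\gamma} \right)^{1/\gamma}},
\qquad
D(t) = \frac{1}{1 + k\,t}, \quad k > 0.
```

Here w(p) captures the overweighting of small probabilities and underweighting of large ones (subjective probability), while a larger k in D(t) corresponds to greater impatience over the passage of time.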

Keywords: impatience, risk aversion, subjective probability, uncertainty

Procedia PDF Downloads 111
166 A Comparison of Proxemics and Postural Head Movements during Pop Music versus Matched Music Videos

Authors: Harry J. Witchel, James Ackah, Carlos P. Santos, Nachiappan Chockalingam, Carina E. I. Westling

Abstract:

Introduction: Proxemics is the study of how people perceive and use space. It is commonly proposed that when people like or engage with a person/object, they will move slightly closer to it, often quite subtly and subconsciously. Music videos are known to add entertainment value to a pop song. Our hypothesis was that adding an appropriately matched video to a pop song would lead to a net approach of the head toward the monitor screen compared to simply listening to an audio-only version of the song. Methods: We presented two musical stimuli in a counterbalanced order to 27 participants (ages 21.00 ± 2.89, 15 female) seated in front of a 47.5 x 27 cm monitor; all stimuli were based on music videos by the band OK Go: Here It Goes Again (HIGA, boredom ratings (0-100) = 15.00 ± 4.76, mean ± SEM, standard error of the mean) and Do What You Want (DWYW, boredom ratings = 23.93 ± 5.98), which did not differ in the boredom elicited (P = 0.21, rank-sum test). Each participant experienced each song only once, and one song (counterbalanced) as audio-only versus the other song as a music video. Movement was measured by video tracking using Kinovea 0.8, based on recording from a lateral aspect; before beginning, each participant had a reflective motion-tracking marker placed on the outer canthus of the left eye. Analysis of the Kinovea X-Y coordinate output in comma-separated values format was performed in Matlab, as were non-parametric statistical tests. Results: We found that the audio-only stimuli (combined for both HIGA and DWYW, mean ± SEM, 35.71 ± 5.36) were significantly more boring than the music video versions (19.46 ± 3.83, P = 0.0066, Wilcoxon Signed Rank Test (WSRT), Cohen's d = 0.658, N = 28). We also found that participants' heads moved around twice as much during the audio-only versions (speed = 0.590 ± 0.095 mm/sec) as during the video versions (0.301 ± 0.063 mm/sec, P = 0.00077, WSRT). However, the participants' mean head-to-screen distances were not detectably smaller (i.e. head closer to the screen) during the music videos (74.4 ± 1.8 cm) compared to the audio-only stimuli (73.9 ± 1.8 cm, P = 0.37, WSRT). If anything, during the audio-only condition, they were slightly closer. Interestingly, the ranges of the head-to-screen distances were smaller during the music video (8.6 ± 1.4 cm) compared to the audio-only condition (12.9 ± 1.7 cm, P = 0.0057, WSRT), the standard deviations were also smaller (P = 0.0027, WSRT), and their heads were held 7 mm higher (video 116.1 ± 0.8 vs. audio-only 116.8 ± 0.8 cm above the floor, P = 0.049, WSRT). Discussion: As predicted, sitting and listening to experimenter-selected pop music was more boring than when the music was accompanied by a matched, professionally made video. However, we did not find that the proxemics of the situation led to approaching the screen. Instead, adding video led to efforts to control the head to a more central and upright viewing position and to suppress head fidgeting.

Keywords: boredom, engagement, music videos, posture, proxemics

Procedia PDF Downloads 170
165 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems

Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber

Abstract:

Understanding and modelling real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient enough even for large networks of up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm to achieve invertibility with a minimum set of measured states. This greedy algorithm is very fast and is also guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open, dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.
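
The path-counting idea described above can be prototyped in a few lines with networkx, as in the hedged sketch below: a toy influence graph, a super-source/super-sink construction, and a count of node-disjoint input-to-output paths. Having at least as many such paths as unknown inputs is used here as the necessary structural condition; the toy graph, node names, and that exact criterion are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: counting node-disjoint paths from input nodes to sensor nodes.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("u1", "x1"), ("u2", "x2"),          # unknown inputs driving two states
    ("x1", "x3"), ("x2", "x3"),
    ("x3", "x4"), ("x2", "x5"),
    ("x4", "y1"), ("x5", "y2"),          # measured output nodes (sensors)
])

inputs, outputs = ["u1", "u2"], ["y1", "y2"]

# Reduce "how many node-disjoint input-to-output paths exist" to a single
# source/sink connectivity question by adding a super-source and super-sink.
H = G.copy()
H.add_edges_from(("source", u) for u in inputs)
H.add_edges_from((y, "sink") for y in outputs)

disjoint_paths = list(nx.node_disjoint_paths(H, "source", "sink"))
print(f"{len(disjoint_paths)} node-disjoint paths for {len(inputs)} inputs")
print("necessary condition satisfied:", len(disjoint_paths) >= len(inputs))
```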

Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement

Procedia PDF Downloads 153
164 Exposure to Radon on Air in Tourist Caves in Bulgaria

Authors: Bistra Kunovska, Kremena Ivanova, Jana Djounova, Desislava Djunakova, Zdenka Stojanovska

Abstract:

The carcinogenic effects of radon as a radioactive noble gas have been studied and show a strong correlation between radon exposure and lung cancer occurrence, even in the case of low radon levels. The major part of the natural radiation dose in humans is received by inhaling radon and its progeny, which originate from the decay chain of U-238. Indoor radon poses a substantial threat to human health when build-up occurs in confined spaces such as homes, mines and caves, and the risk increases with the duration of radon exposure and is proportional to both the radon concentration and the time of exposure. Tourist caves are a case of special environmental conditions that may be affected by high radon concentrations. Tourist caves are a recognized danger in terms of radon exposure to cave workers (guides, employees working in shops built above the cave entrances, etc.), but due to the sensitive nature of the cave environment, high concentrations cannot be easily removed. Forced ventilation of the air in the caves is considered unthinkable due to the possible harmful effects on the microclimate, flora and fauna. The risks to human health posed by exposure to elevated radon levels in caves are not well documented. Various studies around the world often detail very high concentrations of radon in caves and exposure of employees but without a follow-up assessment of the overall impact on human health. This study was developed in the implementation of a national project to assess the potential health effects caused by exposure to elevated levels of radon in buildings with public access under the National Science Fund of Bulgaria, in the framework of grant No КП-06-Н23/1/07.12.2018. The purpose of the work is to assess the radon levels in Bulgarian tourist caves and the exposure of visitors and workers. The sample size was calculated for simple random selection from the 65 available caves (the sampling population), giving 13 caves at a 95 % confidence level and a confidence interval (margin of error) of approximately 25 %. Radon concentrations in air were measured at specific locations in the caves using CR-39 type nuclear track-etch detectors placed by members of the research team. Despite the fact that all of the caves were formed in karst rocks, the radon levels were rather different from each other (97–7575 Bq/m³). An assessment of the influence of the orientation of the caves relative to the earth's surface (horizontal, inclined, vertical) on the radon concentration was performed. The health hazards and exposure risks caused by inhaling radon and its daughter products were evaluated for each surveyed cave. Reducing the time spent in the cave has been recommended in order to decrease the exposure of workers.
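
The reported sample of 13 out of 65 caves is consistent with a standard sample-size calculation. The sketch below uses Cochran's formula with a finite-population correction (95 % confidence, ~25 % margin of error, p = 0.5) as one plausible reconstruction; the authors' exact formula is not stated in the abstract.

```python
import math

def sample_size(N, margin, z=1.96, p=0.5):
    """Cochran sample size with finite-population correction."""
    n0 = z**2 * p * (1.0 - p) / margin**2        # infinite-population size
    return math.ceil(n0 / (1.0 + (n0 - 1.0) / N))

print(sample_size(N=65, margin=0.25))            # -> 13 caves
```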

Keywords: tourist caves, radon concentration, exposure, Bulgaria

Procedia PDF Downloads 192
163 Arthropods Diversity of the Late Carboniferous Souss Basin, Morocco: Paleoecology and Taphonomy

Authors: Abouchouaib Belahmira, Joerg W. Schneider, Hafid Saber

Abstract:

Continental sediments of the uppermost Carboniferous (late Pennsylvanian) El Menizla and Oued Issene formations of the Souss basin, Southwestern High Atlas Mountains, Morocco, have yielded abundant, well-preserved arthropods. These comprise freshwater and terrestrial elements and were found associated with plants, freshwater jellyfish and pelecypods. Arthropods are ubiquitous but typically restricted to the dominant lacustrine black shale taphofacies. The lithofacies interpretation and its correlation with the taphofacies allowed the original depositional environment to be reconstructed as a fluvial-dominated system with wide braided channels and sub-environments ranging from floodplain lakes to local peat backswamps. The late Carboniferous fossiliferous strata have been correlated biostratigraphically with many other Pennsylvanian (Kasimovian/Gzhelian) deposits of North America and Europe on the basis of entomological studies. The faunal elements of the lentic biocoenosis of the Souss basin are depauperate, with the vagile forms slightly more diverse than the sessile ones. The prevailing groups are the small shelly fauna and, in another habitat guild, the apterygotan monuran insects (dasyleptids). The fossils recorded from the Souss basin include crustaceans of various sizes (µm to mm) and morphologies, with preservation states ranging from poorly preserved to, more rarely, well-preserved specimens. Their remains are sporadically found clustered and preserved as internal or external shell molds or steinkerns, often as disarticulated specimens. The ostracods, most likely Carbonita, have shells preserved three-dimensionally. The clam shrimp (conchostracan) record of the Souss basin is often determined as pseudestherids and spinicaudatan leaiids. Their moldic preservation is somewhat similar to that of the pelecypods; they are known from internal casts or impressions. Monuran insects are characterized by their low diversity; only two species are known, Dasyleptus lucasi Brongniart and Dasyleptus noli Rasnitsyn. The terrestrial component consists of pterygotan insects. They are diverse and significantly more frequent throughout the Souss basin fossil localities, numerically dominated by members of the Blattodea (cockroaches). The fossil record includes Blattodea, Protorthoptera, Diaphanopterodea, Ephemeroptera (mayflies), Caloneurodea, Grylloblattodea, Miomoptera and Palaeodictyoptera. Additionally, the preserved insects are mostly represented by completely isolated forewings, rare membranous hindwings, parts of the body or exceptionally preserved specimens, which may reflect a wide spectrum of taphonomic pathways. The steady increase in the taxonomic diversity of fossil sites in the Souss basin, together with the taphonomic interpretation of the arthropod assemblages, has provided novel insight into the complex terrestrial ecosystem that thrived in this key paleotropical region during the late Pennsylvanian and additionally helps in understanding the climate-driven paleobiogeography and paleoecology of late Paleozoic non-marine arthropods.

Keywords: Souss, Carboniferous, arthropods, taphonomy, paleoecology

Procedia PDF Downloads 39
162 Exploring Digital Media’s Impact on Sports Sponsorship: A Global Perspective

Authors: Sylvia Chan-Olmsted, Lisa-Charlotte Wolter

Abstract:

With the continuous proliferation of media platforms, there have been tremendous changes in media consumption behaviors. From the perspective of sports sponsorship, while there is now a multitude of platforms to create brand associations, the changing media landscape and shift of message control also mean that sports sponsors will have to take into account the nature of, and consumer responses toward, these emerging digital media to devise effective marketing strategies. Utilizing the personal interview methodology, this study is qualitative and exploratory in nature. A total of 18 experts from European and American academia, the sports marketing industry, and sports leagues/teams were interviewed to address three main research questions: 1) What are the major changes in digital technologies that are relevant to sports sponsorship; 2) How have digital media influenced the channels and platforms of sports sponsorship; and 3) How have these technologies affected the goals, strategies, and measurement of sports sponsorship. The study found that sports sponsorship has moved from consumer engagement, engagement measurement, and consequences of engagement on brand behaviors to one-on-one micro-targeting, engagement by context, time, and space, and activation and leveraging based on tracking and databases. From the perspective of platforms and channels, the use of mobile devices is prominent during sports content consumption. Increasing multiscreen media consumption means that sports sponsors need to optimize their investment decisions in leagues, teams, or game-related content sources, as they need to go where the fans are most engaged. The study observed an imbalanced strategic leveraging of technology and digital infrastructure. While sports leagues have placed less emphasis on brand value management via technology, sports sponsors have been much more active in utilizing technologies like mobile/LBS tools, big data/user info, real-time marketing and programmatic, and social media activation. Regardless of the new media/platforms, the study found that integration and contextualization are the two essential means of improving sports sponsorship effectiveness through technology. That is, sponsors must effectively integrate social media/mobile/second screens into their existing legacy media sponsorship plans so that technology works for the experience/message instead of distracting fans. Additionally, technological advancement and the attention economy amplify the importance of consumer data gathering, but sports consumer data does not equal loyalty or engagement. This study also affirms the benefit of digital media in offering viral and pre-event activations through storytelling well before the actual event, which is critical for leveraging brand associations before and after it. That is, sponsors now have multiple opportunities and platforms to tell stories about their brands over a longer time period. In summary, digital media facilitate fan experience, access to the brand message, multiplatform/channel presentations, storytelling, and content sharing. Nevertheless, rather than focusing on technology and media, today's sponsors need to define what they want to focus on in terms of content themes that connect with their brands and then identify the channels/platforms. The big challenge for sponsors is to play to each venue/medium's specificity and its fit with the target audience, and not to deliver the same message uniformly in the same format on different platforms/channels.

Keywords: digital media, mobile media, social media, technology, sports sponsorship

Procedia PDF Downloads 298
161 Microplastics Accumulation and Abundance Standardization for Fluvial Sediments: Case Study for the Tena River

Authors: Mishell E. Cabrera, Bryan G. Valencia, Anderson I. Guamán

Abstract:

Human dependence on plastic products has led to global pollution, with plastic particles ranging in size from 0.001 to 5 millimeters, which are called microplastics (hereafter, MPs). The abundance of microplastics is used as an indicator of pollution. However, reports of pollution (abundance of MPs) in river sediments do not consider that the accumulation of sediments and MPs depends on the energy of the river. That is, the abundance of microplastics will be underestimated if the sediments analyzed come from places where the river flows with a lot of energy, and the abundance will be overestimated if the sediment analyzed comes from places where the river flows with less energy. This bias can generate an error greater than 300% of the MP value reported for the same river and should increase when comparisons are made between two rivers with different characteristics. Sections where the river flows with higher energy allow sands to be deposited and limit the accumulation of MPs, while sections where the same river has lower energy allow fine sediments such as clays and silts to be deposited and should facilitate the accumulation of MP particles. That is, the abundance of MPs in the same river is underrepresented when the sediment analyzed is sand, and the abundance of MPs is overrepresented if the sediment analyzed is silt or clay. The present investigation establishes a protocol aimed at incorporating sample granulometry to calibrate MP quantification and eliminate over- or under-representation bias (hereafter granulometric bias). A total of 30 samples were collected by taking five samples within each of six work zones. The slope at the sampling points was less than 8 degrees, corresponding to low-slope areas according to the Van Zuidam slope classification. During sampling, blanks were used to estimate possible MP contamination. Samples were dried at 60 degrees Celsius for three days. A flotation technique was employed to isolate the MPs using a sodium metatungstate solution with a density of 2 g/mL. For organic matter digestion, 30% hydrogen peroxide and Fenton's reagent were used at a ratio of 6:1 for 24 hours. The samples were stained with rose bengal at a concentration of 200 mg/L and were subsequently dried in an oven at 60 degrees Celsius for 1 hour, then identified and photographed under a stereomicroscope (eyepiece magnification 10x, zoom magnification 4x, objective lens magnification 0.35x) for analysis in ImageJ. A total of 630 MP fibers were identified, mainly red, black, blue, and transparent, with an overall average length of 474.310 µm and an overall median length of 368.474 µm. The particle size distribution of the 30 samples was determined using 100 g per sample and sieves with the following apertures: 2 mm, 1 mm, 500 µm, 250 µm, 125 µm and 63 µm. This sieving allowed a visual evaluation and a more precise quantification of the microplastics present. At the same time, the weight of sediment in each fraction was calculated, revealing an evident pattern: as the proportion of sediment in the < 63 µm fraction increases, a significant increase in the number of MP particles is observed.
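
The granulometric calibration the protocol aims at can be illustrated by expressing the MP count per grain-size fraction relative to the sediment mass retained in that fraction. The sketch below uses hypothetical fraction masses and counts, not the Tena River data.

```python
# Hypothetical counts and masses per sieve fraction for one 100 g sample
fractions_um = ["2000-1000", "1000-500", "500-250", "250-125", "125-63", "<63"]
fraction_mass_g = [12.0, 18.0, 25.0, 20.0, 15.0, 10.0]   # sediment retained per fraction
mp_count = [2, 4, 7, 11, 18, 30]                         # fibres identified per fraction

for label, mass, n in zip(fractions_um, fraction_mass_g, mp_count):
    print(f"{label:>10} um: {n:3d} MPs in {mass:5.1f} g -> {n / mass:.2f} MPs/g")

# Bulk abundance referenced to total sediment mass (MPs per gram)
print(f"bulk abundance: {sum(mp_count) / sum(fraction_mass_g):.2f} MPs/g")
```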

Keywords: microplastics, pollution, sediments, Tena River

Procedia PDF Downloads 76
160 Problems and Solutions in the Application of ICP-MS for Analysis of Trace Elements in Various Samples

Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Áron Soós, Xénia Vágó, Dávid Andrási

Abstract:

In agriculture, flame atomic absorption spectrometers (FAAS), graphite furnace atomic absorption spectrometers (GF-AAS), inductively coupled plasma optical emission spectrometers (ICP-OES) and inductively coupled plasma mass spectrometers (ICP-MS) are routinely applied for the analysis of elements in food, food raw materials and environmental samples. An ICP-MS instrument is capable of analysing 70-80 elements in multielemental mode from a 1-5 cm³ sample volume, with detection limits in the µg/kg-ng/kg (ppb-ppt) concentration range. All of these analytical instruments are subject to different physical and chemical interfering effects when analysing the above types of samples. The smaller the concentration of an analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays it is increasingly important to analyse ever smaller concentrations of elements. Of the above instruments, the ICP-MS is generally capable of analysing the smallest concentrations of elements. The ICP-MS instrument applied here is also equipped with Collision Cell Technology (CCT). In CCT mode, certain elements have detection limits that are better (smaller) by 1-3 orders of magnitude compared to a normal ICP-MS analytical method. CCT mode gives better detection limits mainly for the analysis of selenium, arsenic, germanium, vanadium and chromium. To elaborate an analytical method for trace elements with an ICP-MS, the most important interfering effects (problems) were evaluated: 1) physical interferences; 2) spectral interferences (elemental and molecular isobaric); 3) the effect of easily ionisable elements; 4) memory interferences. When analysing food, food raw materials and environmental samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrices having different evaporation and nebulization effectiveness as well as different carbon contents. In our research work, the effects of different water-soluble compounds and of varying carbon content (as sample matrix) on the intensities of the applied elements were examined. In this way, we could find "opportunities" to decrease or eliminate the error of the analyses of the applied elements (Cr, Co, Ni, Cu, Zn, Ge, As, Se, Mo, Cd, Sn, Sb, Te, Hg, Pb, Bi). To analyse these elements in the above samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying a collision cell technique (CCT). The extent of the interfering effect of carbon content depends on the type of compound. The carbon content significantly affects the measured concentrations (intensities) of the above elements, which can be corrected using different internal standards.
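
The internal-standard correction mentioned at the end of the abstract can be sketched as follows; the element choice and signal values are illustrative placeholders, not the authors' data.

```python
def internal_standard_correction(analyte_cps, is_cps_sample, is_cps_calibration):
    """Rescale the analyte signal by the recovery of the internal standard."""
    recovery = is_cps_sample / is_cps_calibration
    return analyte_cps / recovery

# Example: a carbon-rich matrix enhances the analyte signal; the internal standard
# shows the same enhancement, so dividing by its recovery removes the bias.
corrected = internal_standard_correction(analyte_cps=5200.0,
                                         is_cps_sample=11500.0,
                                         is_cps_calibration=10000.0)
print(f"matrix-corrected signal: {corrected:.0f} cps")
```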

Keywords: elements, environmental and food samples, ICP-MS, interference effects

Procedia PDF Downloads 505
159 A Land Use Decision-Making System to Stop Sprawl and Build Holistic, Organic Communities

Authors: Kirk Wickersham

Abstract:

Introduction: Sprawl has been built for the auto. This project anticipates the adoption of autonomous vehicle technology to both enable and require a new urban form – a modern version of the organic form humans have developed over the millennia. It proposes a radically new land use decision-making system to stop further sprawl and channel growth into these new communities. Methodology: For the past 80 years we have built sprawl and strip commercial development – intense commercial and multifamily on the periphery, low-density housing in the center, repeated indefinitely across the landscape. Sprawl is designed to accommodate the auto, and we need an auto to live there. That will change. Within a decade, autonomous vehicles (AVs) and especially robotaxis will replace human-driven vehicles (HDVs). These new vehicles will require a new transportation network that will both enable and require a new urban form. It will resemble the organic urban form developed over millennia – high-intensity uses in the center, surrounded by neighborhoods, with a defined outer boundary – a city limit. The project dubs this new community a HOME Town: Holistic, Organic, Market-driven, and Ergonomic. It will offer a better quality of life at a lower public and private cost. (Designing a transportation system primarily for alternative vehicles is not a requirement for creating a holistic, organic community, but it is the main reason for the reduced cost of housing, transportation, and public services.) Sprawl is created by our existing land use decision-making system – local governments approving one incremental project at a time. To create these new communities, we will need a radically different system. This means regional planning, eliminating development-by-right zoning and incremental development approvals, new standards for roadways and parking, selection of a lead developer, designating and master planning a new community site, channeling development into the new community, and providing equity for the landowners who have been left out of the process. This new process is based on and inspired by state regulation of oilfield development, called "unitization and pooling." It is designed to fit within standard state land use enabling legislation, although states vary in statutory language and case law. The specific implementation program will vary from one community to the next depending on opportunities and constraints on development and on legal and political acceptance. Major Findings: The problems of sprawl and strip commercial development are well known. The quality of life and efficiencies in a holistic, organic, ergonomic community have also been well known for centuries. Now, an integrated planning, legal and regulatory process has been developed to replace sprawl with a 21st century version of these communities. Conclusion: This project offers the opportunity to transform the urban landscape, and urban life, in the 21st century. The process is ready for implementation, and the author invites inquiries from developers and communities.

Keywords: autonomous vehicles, community, home town, land use decision-making system, quality of life, sprawl, strip commercial

Procedia PDF Downloads 7
158 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database, two of which use the natural logarithm of CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase compositions of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with the use of artificial neural networks (ANN) was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter Root Mean Squared Error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known gas geothermometers (previously developed), was statistically evaluated by using an external database to avoid a bias problem. Statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (México). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
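
The RMSE criterion used to rank geothermometer predictions against measured bottomhole temperatures can be illustrated with a short sketch. The simple ln(CO₂)-based calibration and the well data below are made up to keep the example self-contained and are not one of the new geothermometers derived in the work.

```python
import numpy as np

# Hypothetical wells: gas-phase CO2 (mmol/mol) and measured bottomhole temperature (deg C)
co2 = np.array([18.0, 35.0, 60.0, 110.0, 190.0, 320.0])
bht_m = np.array([215.0, 238.0, 255.0, 272.0, 290.0, 305.0])

b, a = np.polyfit(np.log(co2), bht_m, 1)            # simple calibration T = a + b * ln(CO2)
bht_pred = a + b * np.log(co2)

rmse = np.sqrt(np.mean((bht_m - bht_pred) ** 2))    # criterion used to rank geothermometers
print(f"T = {a:.1f} + {b:.1f} ln(CO2);  RMSE = {rmse:.1f} deg C")
```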

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 357
157 On-Ice Force-Velocity Modeling Technical Considerations

Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming-Chang Tsai, Marc Klimstra

Abstract:

Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete's linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice-hockey as a forward skating performance assessment. While preliminary data have been collected on ice, distance constraints of the on-ice test restrict the ability of the athletes to reach their maximal velocity, which limits the model's ability to effectively estimate athlete performance. This is especially true of more elite athletes. This report explores whether athletes on ice are able to reach a velocity plateau similar to what has been seen in overground trials. Fourteen male Major Junior ice-hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between each trial. For overground sprints, participants completed a similar dynamic warm-up to that of the on-ice trials. Following the warm-up, participants completed three 40 m overground sprint trials. For each trial (on-ice and overground), radar (Stalker ATS II, Texas, USA) aimed at the participant's waist was used to collect instantaneous velocity. Sprint velocities were modelled with a custom Python (version 3.2) script using a mono-exponential function, similar to previous work. To determine if on-ice trials were achieving a maximum velocity (plateau), minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (P < 0.001) between overground and on-ice minimum accelerations were observed. It was found that on-ice trials consistently reported higher final acceleration values, indicating that a maintained maximum velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice-hockey populations using current methods. Elite male populations were not able to achieve a velocity plateau similar to what has been seen in overground trials, indicating the absence of a maximum velocity measure. With current velocity and acceleration modeling techniques, including their dependency on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. Therefore, these findings suggest that a greater on-ice sprint distance, or other velocity modeling techniques in which maximal velocity is not required for a complete profile, may be needed.
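
The mono-exponential velocity model and the end-of-sprint acceleration check described above can be sketched as follows; the radar trace is synthetic and the fitting code is an illustration under assumed parameters, not the authors' script.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, v_max, tau):
    """Mono-exponential sprint velocity model: v(t) = v_max * (1 - exp(-t / tau))."""
    return v_max * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 6.0, 120)                               # time (s) over a ~45 m effort
v = mono_exp(t, 9.0, 1.3) + rng.normal(0.0, 0.1, t.size)     # noisy synthetic radar velocity (m/s)

(v_max, tau), _ = curve_fit(mono_exp, t, v, p0=(8.0, 1.0))
a_end = (v_max / tau) * np.exp(-t[-1] / tau)                 # modelled acceleration at the last sample

# An a_end well above zero would indicate that no velocity plateau was reached
print(f"v_max = {v_max:.2f} m/s, tau = {tau:.2f} s, end acceleration = {a_end:.3f} m/s^2")
```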

Keywords: ice-hockey, sprint, skating, power

Procedia PDF Downloads 105
156 STML: Service Type-Checking Markup Language for Services of Web Components

Authors: Saqib Rasool, Adnan N. Mian

Abstract:

Web components were introduced as the latest HTML5 standard for writing modular web interfaces, ensuring maintainability through their isolated scope. Reusability can also be achieved by sharing plug-and-play web components that can be used as off-the-shelf components by other developers. A web component encapsulates all the required HTML, CSS and JavaScript code as a standalone package which must be imported to integrate the component into an existing web interface. This is then followed by integrating the web component with web services to dynamically populate its content. Since web components are reusable as off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service behavior can be verified through type-checking, which is one of the popular solutions for improving the quality of code in many programming languages. However, HTML does not provide type checking as it is a markup language and not a programming language. The contribution of this work is to introduce a new extension of HTML called Service Type-checking Markup Language (STML) for adding support for type checking in HTML for JSON-based REST services. STML can be used to define the expected data types of responses from JSON-based REST services, which are used to populate the content of HTML elements within a web component. Although JSON has five data types, viz. string, number, boolean, object and array, STML is made to support only string, number and boolean. This is because both object and array are considered as strings when populated in HTML elements. In order to define the data type of any HTML element, the developer just needs to add the custom STML attributes st-string, st-number or st-boolean for string, number and boolean, respectively. All these STML annotations are added by the developer writing a web component, and they enable other developers to use automated type-checking to ensure the proper integration of their REST services with the same web component. Two utilities have been written for developers who are using STML-based web components. One of these utilities is used for automated type-checking during the development phase. It uses the browser console to show an error description if the integrated web service does not return a response with the expected data type. The other utility is a Gulp-based command-line utility for removing the STML attributes before going into production. This ensures the delivery of STML-free web pages in the production environment. Both of these utilities have been tested to perform type checking of REST services through STML-based web components, and the results have confirmed the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type-checking of integrated REST services, but it can be extended to a complete service testing suite based on HTML only, which would transform STML from Service Type-checking Markup Language into Service Testing Markup Language.
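
The kind of check that STML's st-string, st-number and st-boolean attributes enable can be illustrated with a short sketch. The version below reimplements the idea in Python purely for illustration (the actual STML utilities are browser- and Gulp-based JavaScript tools), and the HTML snippet and JSON payload are made up.

```python
import json
from html.parser import HTMLParser

# Expected Python types for each STML attribute
EXPECTED = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

class STMLRules(HTMLParser):
    """Collect field -> (attribute, expected type) rules from st-* attributes."""
    def __init__(self):
        super().__init__()
        self.rules = {}
    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in EXPECTED:
                self.rules[value] = (name, EXPECTED[name])

component_html = '<span st-string="name"></span><span st-number="price"></span>'
response = json.loads('{"name": "puck", "price": "12.5"}')   # price arrives as a string

parser = STMLRules()
parser.feed(component_html)
for field, (attr, py_type) in parser.rules.items():
    if not isinstance(response.get(field), py_type):
        print(f"type error: field '{field}' violates {attr} "
              f"(got {type(response.get(field)).__name__})")
```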

Keywords: REST, STML, type checking, web component

Procedia PDF Downloads 257
155 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as capable artificial lift equipment in the heavy oil field. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed by experimental and Computational Fluid Dynamic (CFD) approaches using the DCAB031 model located in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure-rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model implementation was developed on Star-CCM+ using an Overset Mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. In this case, three different rotational speeds were evaluated (200, 300, 400 rpm), and it is possible to show a directly proportional relationship between the rotational speed of the rotor and the flow rate calculated. The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% in the pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, between fluids there is a diminution due to viscosity.
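
The Grid Convergence Index used for the mesh validation can be sketched with the usual three-mesh Richardson-extrapolation procedure; the pressure-rise values and refinement ratio below are placeholders, not the study's results.

```python
import math

def gci(f_fine, f_medium, f_coarse, r=2.0, safety=1.25):
    """Observed order and fine-grid GCI from three systematically refined meshes."""
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    eps = abs((f_medium - f_fine) / f_fine)          # relative change, fine vs medium grid
    return p, 100.0 * safety * eps / (r**p - 1.0)    # observed order, GCI in percent

order, gci_fine = gci(f_fine=412.0, f_medium=405.0, f_coarse=390.0)
print(f"observed order p = {order:.2f}, GCI(fine) = {gci_fine:.1f} %")
```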

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 131
154 Operation System for Aluminium-Air Cell: A Strategy to Harvest the Energy from Secondary Aluminium

Authors: Binbin Chen, Dennis Y. C. Leung

Abstract:

The aluminium (Al)-air cell holds a high volumetric capacity density of 8.05 Ah cm⁻³, benefiting from the trivalence of Al ions. Additional benefits of the Al-air cell are low price and environmental friendliness. Furthermore, the Al energy conversion process is characterized by 100% recyclability in theory. Along with a large raw material reserve base, Al attracts considerable attention as a promising material to be integrated within the global energy system. However, despite the early successful applications in military services, several problems exist that prevent Al-air cells from wide civilian use. The most serious issue is the parasitic corrosion of Al when it contacts the electrolyte. To overcome this problem, super-pure Al alloyed with traces of various metal elements is used to increase the corrosion resistance. Nevertheless, high-purity Al alloys are costly and require high energy consumption during the production process. An alternative approach is to add inexpensive inhibitors directly into the electrolyte. However, such additives would increase the internal ohmic resistance and hamper the cell performance. So far these methods have not provided satisfactory solutions for the problem within Al-air cells. For the operation of alkaline Al-air cells, there are still other minor problems. One of them is the formation of aluminium hydroxide in the electrolyte. This process decreases the ionic conductivity of the electrolyte. Another is the carbonation process within the gas diffusion layer of the cathode, blocking the porosity for gas diffusion. Both of these hinder cell performance. The present work addresses the above problems by building an Al-air cell operation system consisting of four components. A top electrolyte tank containing fresh electrolyte is located at a high level, so that it can drive the electrolyte flow by gravity. A mechanically rechargeable Al-air cell is fabricated with low-cost materials including low-grade Al, carbon paper, and PMMA plates. An electrolyte waste tank with an elaborate channel is designed to separate the hydrogen generated from the corrosion, which is collected by a gas collection device. In the first section of the research work, we investigated the performance of the mechanically rechargeable Al-air cell with a constant electrolyte flow rate, to ensure the repeatability of experiments. Then the whole system was assembled and the feasibility of operation was demonstrated. During the experiment, pure hydrogen is collected by the collection device, which holds potential for various applications. By collecting this by-product, a high utilization efficiency of aluminium is achieved. Considering both the electricity and the hydrogen generated, an overall utilization efficiency of around 90 % or even higher is achieved under different working voltages. The flowing electrolyte can remove the aluminium hydroxide precipitate and solve the electrolyte deterioration problem. This operation system provides a low-cost strategy for harvesting energy from the abundant secondary Al. The system could also be applied to other metal-air cells and is suitable for emergency power supplies, power plants and other applications. The low-cost feature implies great potential for commercialization. Further optimization, such as scaling up and optimization of fabrication, will help to refine the technology into practical market offerings.
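
The overall utilization bookkeeping, combining the charge delivered electrochemically with the aluminium accounted for by the collected hydrogen, can be sketched as follows; all masses and charges are placeholder numbers, not measured data.

```python
F = 96485.0                       # Faraday constant, C/mol
M_AL = 26.98                      # molar mass of Al, g/mol

delivered_charge_C = 9500.0       # integrated cell current over the discharge (assumed)
collected_h2_mol = 0.012          # hydrogen captured from the corrosion reaction (assumed)
al_consumed_g = 1.20              # anode mass loss (assumed)

al_to_electricity = delivered_charge_C / (3.0 * F)   # mol Al, 3 electrons per Al atom
al_to_hydrogen = collected_h2_mol * 2.0 / 3.0        # 2 Al : 3 H2 in the parasitic corrosion
utilization = (al_to_electricity + al_to_hydrogen) * M_AL / al_consumed_g
print(f"overall Al utilization: {utilization * 100:.1f} %")
```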

Keywords: aluminium-air cell, high efficiency, hydrogen, mechanical recharge

Procedia PDF Downloads 285
153 Factors Affecting Air Surface Temperature Variations in the Philippines

Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya

Abstract:

Changes in air surface temperature play an important role in the Philippines' economy, industry, health, and food production. While the increasing global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and determine its major influencing factor/s in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperatures of 17 synoptic stations covering 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequencies of temperature variability, which show 12-month, 30-80-month and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations that are attributed to PDO and CO2 concentrations. Air surface temperature was also correlated with the smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The ENSO teleconnection influence on temperature, wind patterns, cloud cover, and outgoing longwave radiation during different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of a warm southeasterly (cold northeasterly) air mass over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature. However, relative humidity was also found to be increasing, especially in the central part of the country, which results in a high positive trend in the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to look at the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature change and anomalies. In addition, they are more representative of regional temperature rather than a substitute for station-observed air temperature.
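
The Mann-Kendall trend test and a Sen's slope estimate of the kind applied to the station series can be sketched as follows; the annual-mean temperature series below is synthetic, not the 17-station dataset, and no tie correction is included.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall z statistic, two-sided p-value and Sen's slope (no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    slope = np.median([(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)])
    return z, p, slope

rng = np.random.default_rng(0)
years = np.arange(1960, 2016)
temps = 26.5 + 0.0105 * (years - 1960) + rng.normal(0.0, 0.15, years.size)  # synthetic annual means (deg C)
z, p, sen = mann_kendall(temps)
print(f"z = {z:.2f}, p = {p:.4f}, Sen's slope = {sen:.4f} deg C/year")
```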

Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number

Procedia PDF Downloads 327