Search results for: mobile standards
812 Identification of Persistent Trace Organic Pollutants in Various Waste Water Samples Using HPLC
Authors: Almas Hamid, Ghazala Yaqub, Aqsa Riaz
Abstract:
Qualitative validation was performed to detect the presence of persistent organic pollutants (POPs) in various wastewater samples collected from domestic sources (Askari XI housing society, Bedian Road, Lahore), industrial sources (PET bottles, pharmaceutical, textile) and a municipal drain (Hudiara drain) in Lahore. In addition, wastewater analysis of selected parameters was carried out. The pH values for the wastewater samples from Askari XI, PET bottles, pharmaceutical, textile and Hudiara drain were 6.9, 6.7, 6.27, 7.18 and 7.9 respectively, within the NEQS Pakistan range of 6-9. TSS for the respective samples was 194, 241, 254, 140 and 251 mg/L; the values for the PET bottle industry, pharmaceutical industry and Hudiara drain effluents exceeded the NEQS Pakistan limit. Chemical oxygen demand (COD) for the wastewater samples was 896 mg/L, 166 mg/L, 419 mg/L, 812 mg/L and 610 mg/L respectively, all in excess of the NEQS limit (150 mg/L). Similarly, the biological oxygen demand (BOD) values (110.8, 170, 423, 355 and 560 mg/L respectively) were also above the NEQS limit (80 mg/L). Chloride (Cl-) content, total dissolved solids (TDS) and temperature were found to be within the prescribed standard limits. The POPs selected for analysis included five pesticides/insecticides (D.D, Karate, Commando, Finis insect killer, Bifenthrin) and three polycyclic aromatic hydrocarbons (PAHs) (naphthalene, anthracene, phenanthrene). Peak values of the standards were compared with those of the wastewater samples. The results showed the presence of D.D in all wastewater samples, while the pesticide Karate was identified in the Askari XI and textile industry samples. The pesticides Commando, Finis (insect killer) and Bifenthrin were detected in the Askari XI and Hudiara drain wastewater samples. In the case of PAHs, naphthalene was identified in all five wastewater samples, whereas anthracene and phenanthrene were detected in the samples from the Askari XI housing society, PET bottles industry, pharmaceutical industry and textile industry but were absent from the Hudiara drain wastewater. Practical recommendations have been put forth to avoid the hazardous impacts associated with these samples. Keywords: HPLC studies, Lahore, physicochemical analysis, wastewater
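For readers who want to reproduce the exceedance screening described in the abstract above, the following is a minimal sketch (not the authors' code); it uses only the NEQS Pakistan limits quoted in the abstract (pH 6-9, COD 150 mg/L, BOD 80 mg/L) together with the reported measurements.

```python
# Minimal NEQS exceedance check; limits taken from the abstract, TSS omitted
# because its numerical limit is not quoted there.

NEQS_LIMITS = {"COD": 150.0, "BOD": 80.0}   # mg/L, upper limits
PH_RANGE = (6.0, 9.0)

samples = {
    "Askari XI":      {"pH": 6.9,  "COD": 896.0, "BOD": 110.8},
    "PET bottles":    {"pH": 6.7,  "COD": 166.0, "BOD": 170.0},
    "Pharmaceutical": {"pH": 6.27, "COD": 419.0, "BOD": 423.0},
    "Textile":        {"pH": 7.18, "COD": 812.0, "BOD": 355.0},
    "Hudiara drain":  {"pH": 7.9,  "COD": 610.0, "BOD": 560.0},
}

for site, values in samples.items():
    exceedances = [p for p, limit in NEQS_LIMITS.items() if values[p] > limit]
    if not PH_RANGE[0] <= values["pH"] <= PH_RANGE[1]:
        exceedances.append("pH")
    print(f"{site}: exceeds NEQS for {exceedances or 'no parameter'}")
```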
Procedia PDF Downloads 269
811 Instrument Development and Validation for Quality Early Childhood Curriculum in the Malaysian Context
Authors: Sadiah Baharom, Che Nidzam Che Ahmad, Saipol Barin Ramli, Asmayati Yahaya, Sopia Md Yassin
Abstract:
Early childhood care and education (ECCE) in Malaysia aspires to develop children who are intellectually, emotionally, physically and spiritually balanced. This aspiration can only materialise if the early childhood program developed is comprehensive and of a high quality comparable to international standards. As such, there is a pressing need to assess the quality of the program in an all-encompassing manner. The overall research project aims at developing a comprehensive and integrated model of high-quality Malaysian ECCE. One of the major objectives of this project is to assess and evaluate the scope and quality of the existing ECCE programs in Malaysia. To this end, a specific aspect of this objective is to develop and validate an instrument to assess and evaluate the ECCE curriculum of the country. Thus this paper describes the development and validation of an instrument to explore the quality of the early childhood care and education curriculum currently implemented in the country’s ECCE centres. The generation of the constructs and items was based on a set of criteria mapped against existing ECCE practice, document analyses, expert interviews and panel discussions. The items went through expert validation and were field tested on 597 ECCE teachers. The data obtained went through an exploratory factor analysis to validate the constructs of the instrument, followed by reliability studies of internal consistency based on Cronbach's alpha values. The final set of items for the ECCE curriculum instrument, earmarked for the main study, consists of four constructs, namely philosophy and core values, curriculum content, curriculum review and unique features. Each construct consists of between 3 and 21 items, with a total of 36 items in all. The reliability coefficients for the constructs range from 0.65 to 0.961. These values are within the acceptable limits for a reliable instrument to be used in the main study. Keywords: early childhood and care education, instrument development, reliability studies, validity studies
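The internal-consistency step mentioned above can be illustrated with a short routine. This is a sketch of the standard Cronbach's alpha formula, not the authors' analysis script, and the score matrix is invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative only: five teachers rating a 4-item construct on a 5-point scale.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 3))
```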
Procedia PDF Downloads 201
810 Co-Creating an International Flipped Faculty Development Model: A US-Afghan Case Study
Authors: G. Alex Ambrose, Melissa Paulsen, Abrar Fitwi, Masud Akbari
Abstract:
In 2016, a U.S. business college was awarded a sub grant to work with FHI360, a nonprofit human development organization, to support a university in Afghanistan funded by the State Department’s U.S. Agency for International Development (USAID). A newly designed Master’s Degree in Finance and Accounting is being implemented to support Afghanistan’s goal of 20% females in higher education and industry by 2020 and to use finance and accounting international standards to attract capital investment for economic development. This paper will present a case study to describe the co-construction of an approach to an International Flipped Faculty Development Model grounded in blended learning theory. Like education in general, faculty development is also evolving from the traditional face to face environment and interactions to the fully online and now to a best of both blends. Flipped faculty development is both a means and a model for careful integration of the strengths of the synchronous and asynchronous dynamics and technologies with the combination of intentional sequencing to pre-online interactions that prepares and enhances the face to face faculty development and mentorship residencies with follow-up post-online support. Initial benefits from this model include giving the Afghan faculty an opportunity to experience and apply modern teaching and learning strategies with technology in their own classroom. Furthermore, beyond the technological and pedagogical affordances, the reciprocal benefits gained from the mentor-mentee, face-to-face relationship will be explored. Evidence to support this model includes: empirical findings from pre- and post-Faculty Mentor/ Mentee survey results, Faculty Mentorship group debriefs, Faculty Mentorship contact logs, and student early/end of semester feedback. In addition to presenting and evaluating this model, practical challenges and recommendations for replicating international flipped faculty development partnerships will be provided.Keywords: educational development, faculty development, international development, flipped learning
Procedia PDF Downloads 189
809 Urban Corridor Management Strategy Based on Intelligent Transportation System
Authors: Sourabh Jain, Sukhvir Singh Jain, Gaurav V. Jain
Abstract:
Intelligent Transportation System (ITS) is the application of technology for developing a user-friendly transportation system for urban areas in developing countries. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. This paper reviews past studies of ITS applications that have been successfully deployed in urban corridors in India and abroad, and examines the current scenario and the methodology adopted for the planning, design, and operation of traffic management systems. It also presents an evaluation of the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers and is a divided road network with six-lane and eight-lane sections. Two categories of data were collected in February 2016: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, a radar gun, mobile GPS and a stopwatch. The performance interpretations included identification of peak and off-peak hours, congestion and level of service (LOS) at mid-block sections, and delay, followed by the plotting of speed contours and the recommendation of urban corridor management strategies. The analysis indicates that ITS-based urban corridor management strategies can reduce congestion, fuel consumption and pollution, and thereby provide comfort and efficiency to users. The paper presents urban corridor management strategies based on sensors incorporated both in vehicles and on the roads. Keywords: congestion, ITS strategies, mobility, safety
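One of the performance interpretations listed above, identifying the peak hour from classified volume counts, can be sketched as follows. The 15-minute counts are illustrative, not the February 2016 survey data, and the peak hour factor formula is the standard one.

```python
# Illustrative 15-minute volume counts at one intersection (vehicles per 15 min).
counts_15min = [310, 340, 385, 420, 460, 455, 430, 390, 360, 335, 320, 300]

best_start, best_volume = 0, 0
for i in range(len(counts_15min) - 3):
    hourly = sum(counts_15min[i:i + 4])            # four consecutive 15-min intervals
    if hourly > best_volume:
        best_start, best_volume = i, hourly

# Peak hour factor = hourly volume / (4 x highest 15-min volume within that hour).
phf = best_volume / (4 * max(counts_15min[best_start:best_start + 4]))
print(f"Peak hour starts at interval {best_start}, volume = {best_volume} veh/h, PHF = {phf:.2f}")
```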
Procedia PDF Downloads 443
808 The Bioequivalent: A Medical Drug Search Tool Based on a Collaborative Database
Authors: Rosa L. Figueroa, Joselyn A. Hernández
Abstract:
Over the last few years, the Ministry of Health has been developing new health policies to regulate and improve the pharmaceutical system in our country for the benefit of the patient. However, there are still some deficiencies in how medicines are accessed, distributed, and sold. Therefore, it is necessary to empower the patient by offering new resources that improve access to drug information. This work introduces ‘the bioequivalent’, a medical drug search tool created to improve both the dissemination of, and patients' access to, information about the therapeutic equivalence of medicines. The development of the search tool started with a study on the availability of sources of drug information accessible to the patient, in which their advantages and disadvantages were analyzed. The information obtained was used to feed the functional design of the new tool. The design shows an external interface that includes a header, body, sidebar and footer. The header has a menu containing ‘Home’, ‘Who we are’, and ‘Mission and vision’. The body contains the medical drug search tool, and the sidebar handles user log-in as an anonymous user, a registered user, or an administrator. Anonymous users can only use the tool. Registered users can add information on existing medicines in the database; however, additions are restricted to specific items and subject to administrator approval, because the information added must be endorsed by the Chilean Public Health Institute. The administrator has all privileges, including creating or deleting drugs and the information about them. The bioequivalent was tested on different mobile devices, and no failures were found. Moreover, a small survey was answered by ten people who tested the tool; all of them agreed that the tool was useful for obtaining information about bioequivalent drugs and would recommend it to others. Furthermore, 80% of the testers said it was easy to use, and 70% indicated that additional help is not required. These results suggest that ‘the bioequivalent’ may contribute to knowledge about therapeutic bioequivalence and the bioequivalent drugs available in Chile. As future work, the tool will be made available to the public for a first testing stage at a larger scale. Keywords: collaborative database, bioequivalent drugs, search tool, web platform
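The three access levels described above (anonymous, registered, administrator) could be modelled as in the following sketch; the names and rules here are assumptions for illustration, not the actual implementation of ‘the bioequivalent’.

```python
from enum import Enum

class Role(Enum):
    ANONYMOUS = 1      # can only search
    REGISTERED = 2     # can propose additions, pending administrator approval
    ADMIN = 3          # full privileges: create, edit, delete, approve

def can_search(role: Role) -> bool:
    return True        # every role may query the drug database

def can_propose_edit(role: Role) -> bool:
    return role in (Role.REGISTERED, Role.ADMIN)

def can_publish_directly(role: Role) -> bool:
    # Registered users' contributions must be endorsed before publication.
    return role is Role.ADMIN

print(can_propose_edit(Role.REGISTERED), can_publish_directly(Role.REGISTERED))  # True False
```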
Procedia PDF Downloads 232
807 Shared Decision Making in Oropharyngeal Cancer: The Development of a Decision Aid for Resectable Oropharyngeal Carcinoma, a Mixed Methods Study
Authors: Anne N. Heirman, Lisette van der Molen, Richard Dirven, Gyorgi B. Halmos, Michiel W.M. van den Brekel
Abstract:
Background: Due to the rising incidence of oropharyngeal squamous cell cancer (OPSCC), many patients are challenged with choosing between transoral (robotic) surgery and radiotherapy, which offer equal survival and oncological outcomes. Functional outcomes also differ little between the treatments over the years. With this study, the wants and needs of patients and caregivers are identified to develop a comprehensible patient decision aid (PDA). Methods: The development of this PDA is based on the International Patient Decision Aid Standards criteria. In phase 1, relevant literature was reviewed and compared to current counseling papers. We interviewed ten post-treatment patients and ten doctors from four head and neck centers in the Netherlands; the interviews were transcribed verbatim and analyzed. With these results, the first draft of the PDA was developed. Phase 2 involved testing the first draft for comprehensibility and usability. Phase 3 involved testing for feasibility. After this phase, the final version of the PDA was developed. Results: All doctors and patients agreed a PDA was needed. Phase 1 showed that 50% of patients felt well-informed after standard care and 35% missed information about treatment possibilities. Side effects and functional outcomes were rated as the most important for decision-making. With this information, the first version was developed. In phase 2, doctors and patients stated that they were satisfied with the comprehensibility and usability, but that there was too much text. The PDA was revised to reduce text and add more graphics. After the revisions, all doctors found the PDA feasible and felt it would contribute to regular counseling. Patients were satisfied with the result and wished they had seen it before their treatment. Conclusion: Decision-making for OPSCC should focus on differences in side effects and functional outcomes. Patients and doctors found the PDA to be of great value. Future research will explore the benefits of the PDA in clinical practice. Keywords: head-and-neck oncology, oropharyngeal cancer, patient decision aid, development, shared decision making
Procedia PDF Downloads 144
806 Smart Transportation: Bringing Back Sunshine City Harare
Authors: R. Shayamapiki
Abstract:
This study explores the applicability of new urbanism principles in cities of developing countries as a panacea for building sustainable cities through the implementation of Smart Transportation. The Smart Transportation approach to planning has grown remarkably around the globe in the past decade. In the quest to curb traffic congestion and reduce automobile dependency in inner-city Harare, Smart Transportation has been a strong driver towards building sustainable cities. Conceptually, Smart Transportation consists of principles that include walking, cycling and mass transit. The approach has been a success story in cities of the developed world, but its application in cities of developing countries has been doubtful. Since cities of developing countries are multifaceted, with several urban sustainability challenges, the study establishes that there are no robust policy, legislative and institutional frameworks to govern the application of Smart Transportation in urban planning, and hence no clear road towards its success. The questions guiding this investigation are: how capable are cities of developing countries of turning Smart Transportation principles into a success story? What benefits can Smart Transportation bring to sustainable urban development? What are the constraints on embracing the principles, and how can they be addressed? Methodologically, a case study of the urban syntax of the Harare Central Business District and the city's arterial roads, together with its legislation and institutional settings, underpins the research outcomes. The study identifies the hindrances of policy, legislative and institutional incapacity, coupled with economic constraints, a lack of political will and technically inflexible zoning regulations. It also elucidates the need to adopt a localized approach to Smart Transportation. The paper then calls for a strengthening of institutional and legal reform to embrace the concept, policy and legislative support, feasible financial mechanisms, coordination of responsible stakeholders, and reform of planning standards and regulatory frameworks, so that the success story of Smart Transportation can be repeated in the developing world. Keywords: inner-city Harare, new urbanism, smart transportation, sustainable cities
Procedia PDF Downloads 469
805 Promoting Affordable Housing Public-Private Partnerships (PPPs) in Nigeria: Addressing Ethical Concerns in Construction and Exploring Solutions
Authors: Shem Ikoojo Ayegba, Ye Qi
Abstract:
Public-private partnerships (PPPs) can potentially be a transformative mechanism for advancing affordable housing in Nigeria, considering the current housing deficit of between 17 and 24 million units. Nevertheless, their effectiveness is marred by persistent unethical practices such as corruption and the utilization of subpar materials. Through a comprehensive mixed-methods approach, this study delves into the ethical quandaries within Nigerian housing construction and their cascading effects on the success of PPPs. Semi-structured interviews with seasoned construction professionals and an in-depth content analysis of ongoing housing policies and projects in Nigeria reveal a culture of corruption across the value chain. This malaise is exacerbated by glaring deficiencies in oversight and a lack of transparent practices. A robust statistical survey involving diverse professionals, including engineers, architects, and project managers, echoes these findings, emphasizing that a frail institutional framework facilitates the persistence of substandard material use, professional negligence, and rampant bribery. Such compromised construction standards place residents in potential jeopardy and impede the achievement of broader sustainability objectives. This study propounds a suite of policy interventions to pave the way for thriving affordable housing PPPs: initiating transparent bidding processes, establishing non-negotiable quality benchmarks for construction materials, and incorporating independent third-party audits throughout the building phase. Furthermore, cultivating a culture of professional integrity through targeted ethics training for all construction personnel is imperative. This research furnishes pragmatic strategies that can radically enhance the potency of housing PPPs, thereby ensuring safe, durable, and affordable housing solutions for Nigeria's underserved populace. Keywords: public-private partnerships, affordable housing, unethical practices, housing policies, construction ethics
Procedia PDF Downloads 79
804 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare
Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl
Abstract:
Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.Keywords: average run length (ARL), bernoulli cusum (BC) chart, beta binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval
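A minimal sketch of how HPD-based limits for a beta-binomial posterior predictive (BBPP) distribution could be obtained is given below. It is not the authors' code; the prior parameters, observed counts and reporting-period size are assumptions for illustration.

```python
import numpy as np
from scipy.stats import betabinom

def bbpp_hpd_region(n, a, b, coverage=0.99):
    """Smallest set of counts whose BBPP probability reaches the target coverage.
    Returned as (min, max); for a unimodal beta-binomial this set is contiguous."""
    k = np.arange(n + 1)
    pmf = betabinom.pmf(k, n, a, b)
    order = np.argsort(pmf)[::-1]                          # most probable counts first
    inside = order[np.cumsum(pmf[order]) <= coverage]
    inside = order[:len(inside) + 1]                       # include the count that crosses coverage
    return inside.min(), inside.max()

# Posterior Beta(a, b) after observing 18 events in 600 prior cases with a Beta(1, 1) prior.
a, b = 1 + 18, 1 + 600 - 18
lower, upper = bbpp_hpd_region(n=100, a=a, b=b)            # next reporting period of 100 cases
print(f"Signal if the number of events in 100 cases falls outside [{lower}, {upper}]")
```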
Procedia PDF Downloads 201
803 Modeling of Conjugate Heat Transfer including Radiation in a Kerosene/Air Certification Burner
Authors: Lancelot Boulet, Pierre Benard, Ghislain Lartigue, Vincent Moureau, Nicolas Chauvet, Sheddia Didorally
Abstract:
International aeronautical standards demand fire certification for engines to demonstrate their resistance. This demonstration relies on tests performed with prototype engines in the late stages of development. The hardest tests require placing a standardized kerosene flame in front of the engine casing for a given time with an imposed temperature and heat flux. The purpose of this work is to provide a better characterization of a kerosene/air certification burner in order to minimize the risks of test failure. A first Large-Eddy Simulation (LES) study made it possible to model and simulate this burner, including both adiabatic and Conjugate Heat Transfer (CHT) computations. Carried out on unstructured grids with 40 million tetrahedral cells, using the finite-volume YALES2 code, spray combustion, forced convection on walls and conduction in the solid parts of the burner were coupled to achieve a detailed description of heat transfer. It highlighted the fact that conduction inside the solid has a real impact on the flame topology and the combustion regime. However, in the absence of radiative heat transfer, unrealistic equipment temperatures were obtained. The aim of the present study is to include radiative heat transfer in order to reproduce the temperatures given by experimental measurements. First, various test cases are conducted to validate the coupling between the different heat solvers. Then, the adiabatic case, the CHT case, and the CHT case including radiative transfer are studied and compared. The LES model is finally applied to investigate the heat transfer in a flame impaction configuration. The aim is to progress on fire test modeling so as to reach a good level of confidence in the success of the certification test. Keywords: conjugate heat transfer, fire resistance test, large-eddy simulation, radiative transfer, turbulent combustion
Procedia PDF Downloads 223
802 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring
Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover
Abstract:
Measurement of radioactive isotopes of atmospheric xenon is used to detect, locate and identify any confined nuclear tests as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed system, the SPALAX process, to continuously measure the concentration of these fission products. During its atmospheric transport, the radioactive xenon undergoes significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical volume activities measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³ and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. This cell is sandwiched between two low-background NaI(Tl) detectors (70×70×40 mm³ crystals). The expected Minimal Detectable Concentration (MDC) for each radioxenon isotope is on the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network, using the IEEE 1588 Precision Time Protocol. We present this system from its simulation through to the laboratory tests. Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels
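The beta/gamma coincidence selection made possible by the PTP-synchronised acquisition modules can be sketched as a simple timestamp-matching step. The window width and event times below are assumptions for illustration, not the instrument's actual processing chain.

```python
COINCIDENCE_WINDOW_NS = 500          # assumed window width, for illustration only

def coincident_pairs(beta_times_ns, gamma_times_ns, window=COINCIDENCE_WINDOW_NS):
    """Two-pointer scan over time-sorted event lists; returns matched (beta, gamma) pairs."""
    pairs, j = [], 0
    for t_beta in beta_times_ns:
        # Skip gamma events that are too early to match this beta event.
        while j < len(gamma_times_ns) and gamma_times_ns[j] < t_beta - window:
            j += 1
        if j < len(gamma_times_ns) and abs(gamma_times_ns[j] - t_beta) <= window:
            pairs.append((t_beta, gamma_times_ns[j]))
    return pairs

beta = [1_000, 5_200, 9_800, 20_000]     # illustrative timestamps in ns
gamma = [1_300, 9_650, 31_000]
print(coincident_pairs(beta, gamma))     # [(1000, 1300), (9800, 9650)]
```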
Procedia PDF Downloads 126
801 A Single Feature Probability-Object Based Image Analysis for Assessing Urban Landcover Change: A Case Study of Muscat Governorate in Oman
Authors: Salim H. Al Salmani, Kevin Tansey, Mohammed S. Ozigis
Abstract:
The study of the growth of built-up areas and settlement expansion is a major exercise that city managers undertake to establish previous and current developmental trends. This is to ensure that settlement expansion needs are matched with the appropriate levels of services and infrastructure. This research demonstrates the potential of satellite image processing, harnessing the single feature probability-object based image analysis (SFP-OBIA) technique to assess the urban growth dynamics of the Muscat Governorate in Oman for the years 1990, 2002 and 2013. This need is fueled by the continuous expansion of the Muscat Governorate beyond predicted levels of infrastructural provision. Landsat images of the years 1990, 2002 and 2013 were downloaded and preprocessed to ensure appropriate radiometric and geometric standards. A novel approach of probability filtering of the target feature segment was implemented to derive the spatial extent of the final built-up area of the Muscat Governorate for the three years. This proved to be a useful technique, as accuracy assessment results of 55%, 70%, and 71% were recorded for the urban land cover of 1990, 2002 and 2013 respectively. Furthermore, the Normalized Differential Built-Up Index for the various images was derived and used to consolidate the results of the SFP-OBIA through a linear regression model and visual comparison. The results show various hotspots where urbanization has taken place sporadically. Specifically, settlement in the districts (Wilayat) of AL-Amarat, Muscat, and Qurayyat experienced tremendous change between 1990 and 2002, while the districts (Wilayat) of AL-Seeb, Bawshar, and Muttrah experienced more sporadic changes between 2002 and 2013. Keywords: urban growth, single feature probability, object based image analysis, landcover change
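The Normalized Differential Built-Up Index used above to cross-check the SFP-OBIA output follows the standard formula NDBI = (SWIR - NIR) / (SWIR + NIR). The sketch below is illustrative; the band choices (for Landsat TM/ETM+, SWIR is band 5 and NIR is band 4) and the toy reflectance values are assumptions, not the study's data.

```python
import numpy as np

def ndbi(swir: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Built-Up Index, pixel by pixel."""
    swir = swir.astype(float)
    nir = nir.astype(float)
    denom = swir + nir
    denom[denom == 0] = np.nan                 # avoid division by zero on no-data pixels
    return (swir - nir) / denom

swir = np.array([[0.30, 0.12], [0.25, 0.05]])  # toy reflectance values
nir = np.array([[0.20, 0.30], [0.18, 0.40]])
built_up_mask = ndbi(swir, nir) > 0            # positive NDBI is commonly read as built-up
print(built_up_mask)
```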
Procedia PDF Downloads 274
800 Nabokov’s Lolita: Externalization of Contemporary Mind in the Configuration of Hedonistic Aesthetics
Authors: Saima Murtaza
Abstract:
Ethics and aesthetics have invariably remained the two closely integrated artistic appurtenances for the production of any work of art. These artistic devices configure themselves into a complex synthesis in our contemporary literature. The labyrinthine integration of ethics and aesthetics, operating in the lives of human characters, to the extent of transcending all limits has resulted in an artistic puzzle for the readers. Art, no doubt, is an extrinsic expression of the intrinsic life of man. The use of aesthetics in literature pertaining to human existence; aesthetic solipsism, has resulted in the artistic objectification of these characters. The practice of the like aestheticism deprives the characters of their souls, rendering them as mere objects of aesthetic gaze at the hands of their artists-creators. Artists orchestrate their lives founding it on a plot which deviates from normal social and ethical standards. Their perverse attitude can be seen in dealing with characters, their feelings and the incidents of their lives. Morality is made to appear not as a religious construct but as an individual’s private affair. Furthermore, the idea of beauty incarnated, in other words hedonistic aesthetic does not placate a true aesthete. Ethics and aesthetics are the two most recurring motifs of our contemporary literature, especially of Nabokov’s world. The purpose of this study is to peruse these aforementioned motifs in Nabokov’s most enigmatic novel Lolita, a story of pedophilia, which is in fact reflective of our complex individual psychic and societal patterns. The narrative subverts all the traditional and hitherto known notions of aesthetics and ethics. When applied to literature, aesthetic does not simply mean ‘beautiful’ in the text. It refers to an intricate relationship between feelings and perception and also incorporates within its range wide-ranging emotional reactions to text. The term aesthetics in literature is connected with the readers whose critical responses to the text determine the merit of any work to be really a piece of art. Aestheticism is the child of ethics. Morality sets the grounds for the production of any work and the idea of aesthetics gives it transcendence.Keywords: ethics, aesthetics and hedonistic aesthetic, nymphet syndrome, pedophilia
Procedia PDF Downloads 158
799 Conceptual Framework of Continuous Academic Lecturer Model in Islamic Higher Education
Authors: Lailial Muhtifah, Sirtul Marhamah
Abstract:
This article puts forward the conceptual framework of a continuous academic lecturer model in Islamic higher education (IHE). It is intended to contribute to the broader issue of how the concept of excellence can promote adherence to standards in higher education and drive quality enhancement. The model describes a process and steps for gradually increasing the performance and achievement of excellence of regular lecturers. Studies of this model are significant for realizing an excellent academic culture in IHE. Several steps were identified from previous studies through a literature review and empirical findings. A qualitative study was conducted at an institute. Administrators and lecturers were interviewed, and lecturer learning communities were observed, to explore the institute's culture, policies, and procedures. The original contribution of this study is the Continuous Academic Lecturer Model (CALM), with the components Standard, Quality, and Excellence (SQE) as the basis of the framework. The innovation excellence framework also requires Leader Support (LS) for lecturers to achieve a culture of excellence, so the model is named CALM-SQE+LS. Several components of the performance and achievement of the CALM-SQE+LS model should be disseminated to and cultivated among all lecturers to achieve university excellence in terms of innovation. The purpose of this article is to define the concept of “CALM-SQE+LS”. Originally, there were three components in the Continuous Academic Lecturer Model, i.e., standard, quality, and excellence, plus leader support. This study is important to the community as a specific case that may inform educational leaders about mechanisms that can be leveraged to ensure the successful implementation of the policies and procedures outlined by CALM and its components (SQE+LS) in institutional culture and the professional leadership literature. The findings of this study show how the continuous academic lecturer is part of a group's culture and how it benefits the university. The article blends the available criteria into several sub-components to give new insights into empowering lecturers towards innovation excellence in IHE. The proposed conceptual framework is also presented. Keywords: continuous academic lecturer model, excellence, quality, standard
Procedia PDF Downloads 201
798 Inelastic and Elastic Taping in Plantar Pressure of Runners Pronators: Clinical Trial
Authors: Liana Gomide, Juliana Rodrigues
Abstract:
The morphology of the foot defines its mode of operation, and appropriate biomechanics are indispensable for a symmetrical distribution of plantar pressures so that no single component is overloaded in isolation. High plantar pressures at specific points in the foot may be a causal factor in several orthopedic disorders that affect the feet, such as pain and stress fractures. With digital baropodometry equipment, one can observe the intensity of pressures along the entire foot and quantify some of the movements, such as the subtalar pronation present in the midfoot region, which is involved in microtraumas. In clinical practice, excessive movement has been limited with the use of different taping techniques applied to the plantar arch. Thus, the objective of the present study was to analyze and compare the influence of inelastic and elastic taping on the distribution of plantar pressure in pronator runners. This is a randomized, blinded crossover clinical trial. Twenty (20) male subjects, with a mean age of 33 ± 7 years, mean body mass of 71 ± 7 kg, and mean height of 174 ± 6 cm, were included in the study. Data collection was carried out by a single researcher using baropodometry equipment (Tekscan, model F-Scan Mobile). The tests were performed at three different times. In the first, an initial baropodometric evaluation was performed without taping, while running at a speed of 9.0 km/h. In the second and third sessions, the inelastic or elastic taping was applied, in the order defined by the randomization. The results showed that both inelastic and elastic taping provided significant reductions in contact pressure and peak pressure values when compared to the condition without taping. However, elastic taping was more effective in decreasing contact pressure (no taping = 714 ± 201, elastic taping = 690 ± 210, inelastic taping = 716 ± 180) and peak pressure in the midfoot region (no taping = 1490 ± 42, elastic taping = 1273 ± 323, inelastic taping = 1487 ± 437). It is possible to conclude that elastic taping reduced pressure in the midfoot region, thereby reducing subtalar pronation during running. Keywords: elastic taping, inelastic taping, running, subtalar pronation
Procedia PDF Downloads 156
797 Treatment of Non-Small Cell Lung Cancer (NSCLC) With Activating Mutations Considering ctDNA Fluctuations
Authors: Moiseenko F. V., Volkov N. M., Zhabina A. S., Stepanova E. O., Kirillov A. V., Myslik A. V., Artemieva E. V., Agranov I. R., Oganesyan A. P., Egorenkov V. V., Abduloeva N. H., Aleksakhina S. Yu., Ivantsov A. O., Kuligina E. S., Imyanitov E. N., Moiseyenko V. M.
Abstract:
Analysis of ctDNA in patients with NSCLC is an emerging biomarker. Multiple research efforts of quantitative or at least qualitative analysis before and during the first periods of treatment with TKI showed the prognostic value of ctDNA clearance. Still, these important results are not incorporated in clinical standards. We evaluated the role of ctDNA in EGFR-mutated NSCLC receiving first-line TKI. Firstly, we analyzed sequential plasma samples from 30 patients that were collected before intake of the first tablet (at baseline) and at 6, 12, 24, 36, and 48 hours after the “starting point.” EGFR-M+ allele was measured by ddPCR. Afterward, we included sequential qualitative analysis of ctDNA with cobas® EGFR Mutation Test v2 from 99 NSCLC patients before the first dose, after 2 and 4 months of treatment, and on progression. Early response analysis showed the decline of EGFR-M+ level in plasma within the first 48 hours of treatment in 11 subjects. All these patients showed objective tumor response. 10 patients showed either elevation of EGFR-M+ plasma concentration (n = 5) or stable content of circulating EGFR-M+ after the start of the therapy (n = 5); only 3 of these patients achieved an objective response (p = 0.026) when compared to the former group). The rapid decline of plasma EGFR-M+ DNA concentration also predicted for longer PFS (13.7 vs. 11.4 months, p = 0.030). Long-term ctDNA monitoring showed clinically significant heterogeneity of EGFR-mutated NSCLC treated with 1st line TKIs in terms of progression-free and overall survival. Patients without detectable ctDNA at baseline (N = 32) possess the best prognosis on the duration of treatment (PFS: 24.07 [16.8-31.3] and OS: 56.2 [21.8-90.7] months). Those who achieve clearance after two months of TKI (N = 42) have indistinguishably good PFS (19.0 [13.7 – 24.2]). Individuals who retain ctDNA after 2 months (N = 25) have the worst prognosis (PFS: 10.3 [7.0 – 13.5], p = 0.000). 9/25 patients did not develop ctDNA clearance at 4 months with no statistical difference in PFS from those without clearance at 2 months. Prognostic heterogeneity of EGFR-mutated NSCLC should be taken into consideration in planning further clinical trials and optimizing the outcomes of patients.Keywords: NSCLC, EGFR, targeted therapy, ctDNA, prognosis
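The prognostic grouping described above (ctDNA undetectable at baseline, cleared by 2 months, or persistent at 2 months) can be expressed schematically as below; the patient records are invented for illustration and are not the study data.

```python
def ctdna_group(baseline_detectable: bool, detectable_at_2_months: bool) -> str:
    """Classify a patient by the ctDNA pattern discussed in the abstract."""
    if not baseline_detectable:
        return "no ctDNA at baseline (best prognosis)"
    if not detectable_at_2_months:
        return "cleared by 2 months (favourable prognosis)"
    return "persistent at 2 months (worst prognosis)"

patients = [
    {"id": "P01", "baseline": False, "month2": False},
    {"id": "P02", "baseline": True,  "month2": False},
    {"id": "P03", "baseline": True,  "month2": True},
]
for p in patients:
    print(p["id"], "->", ctdna_group(p["baseline"], p["month2"]))
```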
Procedia PDF Downloads 54
796 Increased Seedling Vigor Through Phytohomeopathy
Authors: Jasper Jose Zanco
Abstract:
Plants are affected by substances diluted below certain limits. In seeds subjected to ultra-high dilutions (UHD), according to phytohomeopathic methods, it is possible to reduce the concentrations to infinitesimal levels and the effects persist. This research aimed to test different potencies of UHD to modify the vigor of Eruca versicaria (L) Cav. seedlings. The research was carried out at the Plant Production Laboratory of UNISUL University in Santa Catarina, Brazil. Eight UHD treatments were tested, four drops for every 30 mL of distilled water: Control (70% alcohol - A70); Sulphur (S9), Acidum fluoridricum (A30), Calcarea carbonica (C200), Graphies naturalis (G200), Kali carbonicum (K100) Belladonna (B12), diluted and succussed in Hahnemannian centesimal standards. Succussion is a standard pharmaceutical method found in worldwide pharmaceuticals. The statistical design consisted of 50 seeds every 4 replicates per treatment, completely randomized, followed by ANOVA and Tukey's test. Succussion may integrate the high dilution of water treatments, even after successive dilutions, and the product of this process acts through physical-chemical and bioelectric stimuli, causing physiological responses at the cellular level, such as the activation of antioxidant systems, increased resistance to environmental stress or growth modulation. According to some researchers, these responses could be mediated by genetic expression changes or the plants' cellular signaling systems. The results showed significant differences between the control (A70) and the other treatments. Conductivity measurements were made in the seed germination water and impedance; the seedlings were measured for dry weight and total area. The highest conductivity occurred in the control treatment (27.8 μS/cm) and the lowest in K100 (21.3 μS/cm). After germination, on germitest paper, A70 was significantly different from G200 (<1%) and S9 (5%). Both homeopathies differed from the other treatments, with S9 obtaining the best germination (87.1%) and vigor index (IV=7.98) in relation to the other treatments. The control, A70, presented the lowest germination (63.9%) and vigor (IV=4.93).Keywords: ultra high dilution, impedance, condutivity, eruca versicaria
Procedia PDF Downloads 18
795 Evaluating and Improving Healthcare Staff Knowledge of the [NG179] NICE Guidelines on Elective Surgical Care during the COVID-19 Pandemic: A Quality Improvement Project
Authors: Stavroula Stavropoulou-Tatla, Danyal Awal, Mohammad Ayaz Hossain
Abstract:
The first wave of the COVID-19 pandemic saw several countries issue guidance postponing all non-urgent diagnostic evaluations and operations, leading to an estimated backlog of 28 million cases worldwide and over 4 million in the UK alone. In an attempt to regulate the resumption of elective surgical activity, the National Institute for Health and Care Excellence (NICE) introduced the ‘COVID-19 rapid guideline [NG179]’. This project aimed to increase healthcare staff knowledge of the aforementioned guideline to a targeted score of 100% in the disseminated questionnaire within 3 months at the Royal Free Hospital. A standardized online questionnaire was used to assess the knowledge of surgical and medical staff at baseline and following each 4-week-long Plan-Study-Do-Act (PDSA) cycle. During PDSA1, the A4 visual summary accompanying the guideline was visibly placed in all relevant clinical areas and the full guideline was distributed to the staff in charge together with a short briefing on the salient points. PDSA2 involved brief small-group teaching sessions. A total of 218 responses was collected. Mean percentage scores increased significantly from 51±19% at baseline to 81±16% after PDSA1 (t=10.32, p<0.0001) and further to 93±8% after PDSA2 (t=4.9, p<0.0001), with 54% of participants achieving a perfect score. In conclusion, the targeted distribution of guideline printouts and visual aids, combined with small-group teaching sessions, were simple and effective ways of educating healthcare staff about the new standards of elective surgical care at the time of COVID-19. This could facilitate the safe restoration of surgical activity, which is critical in order to mitigate the far-reaching consequences of surgical delays on an unprecedented scale during a time of great crisis and uncertainty.Keywords: COVID-19, elective surgery, NICE guidelines, quality improvement
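The score comparisons reported above (baseline versus post-PDSA questionnaire means compared with a t-test) can be sketched as follows. The score arrays are illustrative, not the 218 collected responses, and Welch's unpaired t-test is used here as an assumption about the exact test variant.

```python
import numpy as np
from scipy import stats

baseline = np.array([45, 60, 52, 38, 71, 55, 49, 63, 40, 58])       # % scores, illustrative
post_pdsa1 = np.array([78, 85, 92, 70, 88, 81, 95, 76, 83, 79])

# Welch's two-sample t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(post_pdsa1, baseline, equal_var=False)
print(f"mean {baseline.mean():.0f}% -> {post_pdsa1.mean():.0f}%, t = {t_stat:.2f}, p = {p_value:.4f}")
```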
Procedia PDF Downloads 195
794 Design of Traffic Counting Android Application with Database Management System and Its Comparative Analysis with Traditional Counting Methods
Authors: Muhammad Nouman, Fahad Tiwana, Muhammad Irfan, Mohsin Tiwana
Abstract:
Traffic congestion has been increasing significantly in major metropolitan areas as a result of increased motorization, urbanization, population growth and changes in the urban density. Traffic congestion compromises efficiency of transport infrastructure and causes multiple traffic concerns; including but not limited to increase of travel time, safety hazards, air pollution, and fuel consumption. Traffic management has become a serious challenge for federal and provincial governments, as well as exasperated commuters. Effective, flexible, efficient and user-friendly traffic information/database management systems characterize traffic conditions by making use of traffic counts for storage, processing, and visualization. While, the emerging data collection technologies continue to proliferate, its accuracy can be guaranteed through the comparison of observed data with the manual handheld counters. This paper presents the design of tablet based manual traffic counting application and framework for development of traffic database management system for Pakistan. The database management system comprises of three components including traffic counting android application; establishing online database and its visualization using Google maps. Oracle relational database was chosen to develop the data structure whereas structured query language (SQL) was adopted to program the system architecture. The GIS application links the data from the database and projects it onto a dynamic map for traffic conditions visualization. The traffic counting device and example of a database application in the real-world problem provided a creative outlet to visualize the uses and advantages of a database management system in real time. Also, traffic data counts by means of handheld tablet/ mobile application can be used for transportation planning and forecasting.Keywords: manual count, emerging data sources, traffic information quality, traffic surveillance, traffic counting device, android; data visualization, traffic management
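The count-storage idea described above can be sketched with a small relational example. The paper uses an Oracle database and SQL; sqlite3 is used here only as a stand-in so the example runs anywhere, and the table and column names are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE traffic_count (
        station_id   TEXT,
        counted_at   TEXT,     -- ISO timestamp from the tablet application
        vehicle_type TEXT,     -- e.g. car, bus, truck, motorcycle
        volume       INTEGER
    )
""")
rows = [
    ("ST-01", "2024-05-01T08:00", "car", 412),
    ("ST-01", "2024-05-01T08:00", "bus", 18),
    ("ST-01", "2024-05-01T08:15", "car", 398),
]
conn.executemany("INSERT INTO traffic_count VALUES (?, ?, ?, ?)", rows)

# Aggregate per station and interval, the kind of query the GIS layer could map.
for row in conn.execute(
    "SELECT station_id, counted_at, SUM(volume) FROM traffic_count GROUP BY station_id, counted_at"
):
    print(row)
```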
Procedia PDF Downloads 193
793 An Enzyme Technology - Metnin™ - Enables the Full Replacement of Fossil-Based Polymers by Lignin in Polymeric Composites
Authors: Joana Antunes, Thomas Levée, Barbara Radovani, Anu Suonpää, Paulina Saloranta, Liji Sobhana, Petri Ihalainen
Abstract:
Lignin is an important component in the exploitation of lignocellulosic biomass. It has been shown that within the next years, the yield of added-value lignin-based chemicals and materials will generate renewable alternatives to oil-based products (e.g. polymeric composites, resins and adhesives) and enhance the economic feasibility of biorefineries. In this paper, a novel technology for lignin valorisation (METNIN™) is presented. METNIN™ is based on the oxidative action of an alkaliphilic enzyme in aqueous alkaline conditions (pH 10-11) at mild temperature (40-50 °C) combined with a cascading membrane operation, yielding a collection of lignin fractions (from oligomeric down to mixture of tri-, di- and monomeric units) with distinct molecular weight distribution, low polydispersity and favourable physicochemical properties. The alkaline process conditions ensure the high processibility of crude lignin in an aqueous environment and the efficiency of the enzyme, yielding better compatibility of lignin towards targeted applications. The application of a selected lignin fraction produced by METNIN™ as a suitable lignopolyol to completely replace a commercial polyol in polyurethane rigid foam formulations is presented as a prototype. Liquid lignopolyols with a high lignin content were prepared by oxypropylation and their full utilization in the polyurethane rigid foam formulation was successfully demonstrated. Moreover, selected technical specifications of different foam demonstrators were determined, including closed cell count, water uptake and compression characteristics. These specifications are within industrial standards for rigid foam applications. The lignin loading in the lignopolyol was a major factor determining the properties of the foam. In addition to polyurethane foam demonstrators, other examples of lignin-based products related to resins and sizing applications will be presented.Keywords: enzyme, lignin valorisation, polyol, polyurethane foam
Procedia PDF Downloads 153
792 Perception of Customers towards Service Quality: A Comparative Analysis of Organized and Unorganised Retail Stores (with Special Reference to Bhopal City)
Authors: Abdul Rashid, Varsha Rokade
Abstract:
Service quality within retail units is pivotal for satisfying and retaining customers. This study of customer perception of service quality variables in retail aims to identify the dimensions and their impact on customers. An analytical study of the different retail service quality variables was carried out to understand the relationships between them. The study explores the factors that attract customers to organised and unorganised retail stores in the capital city of Madhya Pradesh, India. As organised retailers are seen as offering similar products in their outlets, improving service quality is seen as critical to ensuring competitive advantage over unorganised retailers. Data were collected through a structured questionnaire on a five-point Likert scale from existing walk-in customers of selected organised and unorganised retail stores in Bhopal city, Madhya Pradesh, India. The data were then analysed using the Statistical Package for the Social Sciences (SPSS), in particular factor analysis, percentage analysis, ANOVA and chi-square tests. This study seeks the interrelationships between the various retail service quality dimensions, which will help retailers identify the steps needed to improve the overall quality of service. Thus, the findings of the study help in understanding the service quality variables that should be considered by organised and unorganised retail stores in the capital city of Madhya Pradesh, India. The findings of this empirical research also reiterate the view that the dimensions of service quality in retail play an important role in enhancing customer satisfaction in a sector with high growth potential and tremendous opportunities in rapidly growing economies like India’s. With the introduction of FDI in multi-brand retailing, a large number of international retail players are expected to enter the Indian market; this in turn will bring more competition to the retail sector. To benchmark themselves against global standards, Indian retailers will have to improve their service quality. Keywords: organized retail, unorganised retail, retail service quality, service quality dimension
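The chi-square analysis mentioned above can be illustrated with a test of independence between store format and response category. The contingency table is invented for illustration, not the survey data.

```python
from scipy.stats import chi2_contingency

#                 satisfied  neutral  dissatisfied
observed = [
    [80, 30, 15],   # organised retail respondents
    [55, 40, 35],   # unorganised retail respondents
]
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```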
Procedia PDF Downloads 230
791 Atherosclerosis Prevalence Within Populations of the Southeastern United States
Authors: Samuel P. Prahlow, Anthony Sciuva, Katherine Bombly, Emily Wilson, Shiv Dhiman, Savita Arya
Abstract:
A prevalence cohort study of atherosclerotic lesions within cadavers was performed to better understand and characterize the prevalence of atherosclerosis among Georgia residents within the Philadelphia College of Osteopathic Medicine (PCOM) - Georgia body donor program. Specimens were procured from cadavers used for anatomical dissection by medical, physical therapy, and biomedical science students at PCOM - South Georgia and PCOM - Georgia. Tissues were prepared as hematoxylin and eosin (H&E)-stained histological slides by Colquitt Regional Medical Center Laboratory Services. One section from each of the following arteries was taken after cadaveric dissection at the site of greatest calcification palpated grossly (if present): left anterior descending coronary artery, left internal carotid artery, abdominal aorta, splenic artery, and hepatic artery. All specimens were graded and categorized according to the American Heart Association’s Modified and Conventional Standards for Atherosclerotic Lesions using ×4, ×10, and ×40 microscopic magnification. Our study cohort included 22 cadavers, 16 females and 6 males. The average age was 72.54 years and the median age was 72 years, with a range of 52 to 90 years. A cause of death determination listing vascular and/or cardiovascular causes was present on 6 of the 22 death certificates. Nineteen of 22 (86%) cadavers had at least one artery graded > 5. Of the cadavers with at least one artery graded greater than 5, only 5 of 19 (26%) had a vascular or cardiovascular cause of death reported. Malignancy was listed as a cause of death on 7 (32%) death certificates. The average atherosclerosis gradings of the common hepatic, splenic, and left internal carotid arteries (2.15, 3.05, and 3.36, respectively) were lower than those of the left anterior descending artery and the abdominal aorta (5.16 and 5.86, respectively). This prevalence study characterizes the atherosclerosis found in five medium and large systemic arteries within cadavers from the state of Georgia. Keywords: pathology, atherosclerosis, histology, cardiovascular
Procedia PDF Downloads 215
790 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 through sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified by measuring the mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework that leverages multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
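To make the model-averaging idea above concrete, the following minimal sketch (an assumed illustration, not the authors' implementation) averages the class probabilities of a logistic regression, a random forest, and a small neural network with scikit-learn, and estimates predictor importance through permutation-based mean decrease in accuracy; the dataset, feature set, and hyperparameters are placeholders.

```python
# Minimal sketch of the model-averaging idea (assumed, not the authors' implementation).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data standing in for timing, weather, and past pollutant features.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Soft voting averages the predicted class probabilities of the three models.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("combined accuracy:", ensemble.score(X_test, y_test))

# Mean decrease in accuracy estimated via permutation importance on the held-out set.
result = permutation_importance(ensemble, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature {idx}: mean decrease in accuracy = {result.importances_mean[idx]:.4f}")
```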
Procedia PDF Downloads 127789 The Social Impact of Green Buildings
Authors: Elise Machline
Abstract:
Policy instruments have been developed worldwide to reduce the energy demand of buildings. Two types of such instruments are green building rating systems and energy efficiency standards for buildings, such as Green Star (Australia), LEED (United States, Leadership in Energy and Environmental Design), Energy Star (United States), and BREEAM (United Kingdom, Building Research Establishment Environmental Assessment Method). The popularity of the idea of sustainable development has led market actors to consider the potential value generated by the environmental performance of buildings, labeled “green value” in the literature. The sustainable performance of buildings is expected to improve their attractiveness and increase their value. A growing number of empirical studies demonstrate that green buildings yield rental/sale premia, as well as higher occupancy rates and thus higher asset values. The results suggest that green buildings are not affordable to all and that their construction tends to have a gentrifying effect. An increasing number of countries are institutionalizing green strategies for affordable housing. In that sense, making green buildings affordable to all will depend on government policies. This research aims to investigate whether green building fosters inequality in Israel under the banner of sustainability. The method used is market value comparison: the sale prices of green buildings are compared with those of non-certified buildings of the same type that have undergone recent transactions, and the “market value” is deduced from those sources by analogy. The results show that, in Israel, green building projects are usually addressed to the middle and upper classes. The green apartment sale premium is about 19% compared to non-certified dwellings. There is a link between energy and/or environmental performance and the financial value of the dwellings. Moreover, the price differential is much higher than the value of the energy savings. This perpetuates socio-spatial and socio-economic inequality as well as ecological vulnerability for the poor and other socially marginal groups. Moreover, there is no green affordable housing, and the authorities do not subsidize green building or retrofitting.Keywords: green building, gentrification, social housing, green value, green building certification
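As a rough illustration of this comparison method, the sketch below uses hypothetical sale prices (not the study's data) for matched certified and non-certified dwellings and computes the average sale premium.

```python
# Hypothetical illustration of the sale-price comparison method; not the study's data.
certified = [1_150_000, 980_000, 1_320_000]    # sale prices of green-certified dwellings
comparable = [960_000, 830_000, 1_110_000]     # recent transactions for similar non-certified dwellings

# Premium of each certified dwelling relative to its non-certified comparable.
premia = [(g - c) / c for g, c in zip(certified, comparable)]
average_premium = sum(premia) / len(premia)
print(f"average green sale premium: {average_premium:.1%}")
```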
Procedia PDF Downloads 419788 Computational Fluid Dynamics Simulations of Air Pollutant Dispersion: Validation of Fire Dynamics Simulator Against the CUTE Experiments of the COST ES1006 Action
Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere
Abstract:
Following in-house objectives, the Central Laboratory of the Paris Police Prefecture (LCPP) conducted a general review of models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, the LCPP postulates that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and evaluation of FDS within the framework of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. In this paper, the CUTE dataset, carried out in the city of Hamburg, and its wind tunnel mock-up have been used. We have compared FDS results with wind tunnel measurements from the CUTE trials on the one hand, and with the results of the models involved in the COST Action on the other. The most time-consuming part of creating input data for simulations is the transfer of obstacle geometry information to the format required by FDS. Thus, we have developed Python codes to automatically convert building and topographic data to the FDS input file. To evaluate the predictions of FDS against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE) and the fraction of predictions within a factor of two of observations (FAC2). Like the CFD models tested in the COST Action, FDS results demonstrate good agreement with measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE fall within acceptable tolerances.Keywords: numerical simulations, atmospheric dispersion, COST ES1006 action, CFD model, CUTE experiments, wind tunnel data, numerical results
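For reference, the short sketch below implements the three validation metrics named above using their standard definitions from the model evaluation literature; the observation and prediction arrays are placeholders, not the CUTE measurements.

```python
# Standard definitions of FB, NMSE and FAC2 (placeholder data, not the CUTE measurements).
import numpy as np

def fractional_bias(obs, pred):
    # FB compares mean observed and mean predicted concentrations.
    return (np.mean(obs) - np.mean(pred)) / (0.5 * (np.mean(obs) + np.mean(pred)))

def nmse(obs, pred):
    # NMSE normalizes the mean squared error by the product of the means.
    return np.mean((obs - pred) ** 2) / (np.mean(obs) * np.mean(pred))

def fac2(obs, pred):
    # FAC2 is the fraction of predictions within a factor of two of observations.
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

obs = np.array([1.2, 0.8, 2.5, 3.1, 0.4])    # measured concentrations (arbitrary units)
pred = np.array([1.0, 1.1, 2.0, 3.5, 0.9])   # modeled concentrations

print("FB   =", fractional_bias(obs, pred))
print("NMSE =", nmse(obs, pred))
print("FAC2 =", fac2(obs, pred))
```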
Procedia PDF Downloads 133787 Phytoremediation Potential of Enhanced Tobacco BAC F3 in Soil Contaminated with Heavy Metals
Authors: Violina Angelova
Abstract:
A comparative study has been carried out into the impact of organic meliorants on the uptake of heavy metals, micro- and macroelements, and on the phytoremediation potential of enhanced tobacco BAC F3. The soil used in this experiment was sampled from the vicinity of the Non-Ferrous-Metal Works near Plovdiv, Bulgaria. The pot experiment consisted of a randomized complete block design containing nine treatments and three replications (27 pots). The treatments consisted of a control (with no organic meliorants) and compost and vermicompost meliorants (added at 5%, 10%, 15%, and 30%, recalculated based on dry soil weight). Upon reaching commercial ripeness, the tobacco plants were gathered. Heavy metal, micro- and macroelement contents in the roots, stems, and leaves of tobacco were analyzed after microwave mineralization. To determine the elements in the samples, inductively coupled plasma emission spectrometry (Jobin Yvon Emission - JY 38 S, France) was used. The distribution of heavy metals, micro-, and macroelements among the organs of the enhanced tobacco is selective and depends above all on the plant part and the element examined. Pb, Zn, Cu, Fe, Mn, P and Mg distribution in tobacco decreases in the following order: roots > leaves > stems, and for Cd, K, and Ca: leaves > roots > stems. The high concentration of Cd in the leaves and the high translocation factor indicate that enhanced tobacco could be used in phytoextraction. The tested organic amendments significantly influenced the uptake of heavy metals, micro- and macroelements by the roots, stems, and leaves of tobacco. A correlation was found between the quantity of the mobile forms and the uptake of Pb, Zn, and Cd by the enhanced tobacco. The compost and vermicompost treatments significantly reduced heavy metal concentrations in the leaves and increased the uptake of K, Ca and Mg. The 30% compost and 30% vermicompost treatments led to the maximal reduction of heavy metals in enhanced tobacco BAC F3. The addition of compost and vermicompost thus reduces the accumulation of heavy metals in the leaves and, with it, the phytoremediation potential of enhanced tobacco BAC F3. Acknowledgment: The financial support by the Bulgarian National Science Fund Project DFNI Н04/9 is greatly appreciated.Keywords: heavy metals, micro and macroelements, enhanced tobacco BAC F3, phytoremediation, organic meliorants
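As a hedged illustration of how a translocation factor of the kind mentioned above is commonly computed (leaf-to-root concentration ratio), the sketch below uses hypothetical concentrations rather than the measured data.

```python
# Hypothetical illustration of the translocation factor (TF); values are not the measured data.
concentrations_mg_per_kg = {
    # element: (roots, stems, leaves)
    "Cd": (4.0, 1.5, 6.5),
    "Pb": (120.0, 20.0, 60.0),
    "Zn": (300.0, 90.0, 220.0),
}

for element, (roots, stems, leaves) in concentrations_mg_per_kg.items():
    tf = leaves / roots  # TF > 1 suggests efficient root-to-shoot transfer, relevant for phytoextraction
    print(f"{element}: TF = {tf:.2f}")
```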
Procedia PDF Downloads 156786 Evaluation of Alternative Energy Sources for Energy Production in Turkey
Authors: Naci Büyükkaracığan, Murat Ahmet Ökmen
Abstract:
In parallel with the population growth rate, the world's need for energy sources is steadily and incessantly increasing. Industrialization, rising living standards, and technological developments, especially in developing countries, add to this situation, and demand for energy will be even greater in the future. Alternative energy sources have aroused interest for reasons such as the serious environmental issues caused by fossil energy sources, potentially decreasing reserves, the social, political and economic problems caused by dependency on supplier countries, and price instability. Especially in developed countries, such as European countries and the U.S.A. in particular, alternative energy sources such as wind, geothermal, solar and biomass energy, hydropower and hydrogen have been utilized in different forms, especially in electricity production. This study includes a review of technical and environmental factors for energy sources that are potential replacements for fossil fuels and examines their fitness to supply the energy for a high standard of living on a worldwide basis. Despite all developments, fossil energy sources are still overwhelmingly used all around the world in primary energy consumption, and they will outnumber other energy sources in the short term. Today, in parallel with population and economic growth in Turkey, energy consumption continues to increase. On one side, Turkey, currently 80% dependent on energy-supplying countries, has been intensively searching for fossil energy raw materials within its own borders in order to lower this percentage; on the other side, there has been considerable research exploring the potential and utilization of alternative energy sources. This will lead both to a decrease in foreign energy dependency and to a greater variety of energy sources. This study presents the current energy potential of Turkey, the historical development of these energy sources, and their share in electricity production. The research also sought answers to the question of whether this potential can be sufficient in the future. As a result of this study, it was concluded that geothermal energy, particularly in the active tectonic regions of Turkey, holds valuable alternative energy potential alongside wind and solar energy.Keywords: alternative energy sources, energy productions, hydroenergy, solar energy, wind energy
Procedia PDF Downloads 630785 A Comparative Analysis of Various Companding Techniques Used to Reduce PAPR in VLC Systems
Authors: Arushi Singh, Anjana Jain, Prakash Vyavahare
Abstract:
Recently, Li-Fi (light fidelity), based on the VLC (visible light communication) technique and up to 100 times faster than Wi-Fi, has been launched, and the 5G mobile communication system is proposed to use VLC-OFDM as its transmission technique. The VLC system, which operates on visible light, is considered for efficient spectrum use and easy intensity modulation through LEDs. The reason for the high speed in VLC is the LED, as LEDs flicker incredibly fast (on the order of MHz). Another advantage of employing LEDs is that they act as low-pass filters, resulting in no out-of-band emission. The VLC system falls under the category of ‘green technology’ for utilizing LEDs. In the present scenario, OFDM is used for high data rates, interference immunity and high spectral efficiency. In spite of these advantages, OFDM suffers from large PAPR, inter-carrier interference (ICI) and frequency offset errors. Since the data transmission technique used in the VLC system is OFDM, the system suffers the drawbacks of both OFDM and VLC: the non-linearity due to the non-linear characteristics of the LED, and the PAPR of OFDM, because of which the high power amplifier enters its non-linear region. The proposed paper focuses on the reduction of PAPR in VLC-OFDM systems. Many techniques are applied to reduce PAPR: clipping introduces distortion in the carrier; the selective mapping technique wastes bandwidth; and the partial transmit sequence is very complex due to the exponentially increasing number of sub-blocks. The paper discusses three companding techniques, namely the µ-law, A-law and advanced A-law companding techniques. The analysis shows that the advanced A-law companding technique reduces the PAPR of the signal by adjusting the companding parameter within its range. VLC-OFDM systems are the future of wireless communication, but non-linearity in VLC-OFDM is a severe issue. The proposed paper discusses techniques to reduce PAPR, one of the non-linearities of the system. The companding techniques discussed in this paper provide better results without increasing the complexity of the system.Keywords: non-linear companding techniques, peak to average power ratio (PAPR), visible light communication (VLC), VLC-OFDM
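To make the PAPR and companding concepts concrete, the sketch below (a generic illustration, not the authors' VLC-OFDM simulation) generates an OFDM-like time-domain signal, computes its PAPR, and applies µ-law companding; the A-law and advanced A-law variants would substitute their own compression functions.

```python
# Generic illustration of PAPR and mu-law companding for an OFDM-like signal
# (not the authors' VLC-OFDM simulation).
import numpy as np

def papr_db(signal):
    # PAPR = peak instantaneous power over mean power, in dB.
    power = np.abs(signal) ** 2
    return 10 * np.log10(power.max() / power.mean())

def mu_law_compand(signal, mu=255.0):
    # Classic mu-law compression: small amplitudes are boosted, the peak is preserved.
    peak = np.max(np.abs(signal))
    return peak * np.sign(signal) * np.log1p(mu * np.abs(signal) / peak) / np.log1p(mu)

rng = np.random.default_rng(0)
n_subcarriers = 256
# Random QPSK symbols on each subcarrier; the IFFT gives the time-domain OFDM symbol.
qpsk = (rng.choice([-1, 1], n_subcarriers) + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(n_subcarriers)

# VLC drives an LED with a real-valued intensity waveform, so the real part is companded here.
waveform = ofdm_symbol.real
print("PAPR before companding: %.2f dB" % papr_db(waveform))
print("PAPR after mu-law companding: %.2f dB" % papr_db(mu_law_compand(waveform)))
```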
Procedia PDF Downloads 285784 Dietary Micronutrients and Health among Youth in Algeria
Authors: Allioua Meryem
Abstract:
Similar to much of the developing world, Algeria is currently undergoing an epidemiological transition. While malnutrition, under-nutrition and infectious diseases used to be the main causes of poor health, today there is a higher proportion of chronic, non-communicable diseases (NCDs), including cardiovascular disease, diabetes mellitus, cancer, etc. According to estimates for Algeria from the World Health Organization (WHO), NCDs accounted for 63% of all deaths in 2010. The objective of this study was the assessment of eating habits and anthropometric characteristics in a group of youth aged 15 to 19 years in Tlemcen. This descriptive cross-sectional study was conducted on a total of 806 youth. The classification of nutritional status was established using the international IOTF standards: youth were defined as obese if they had a BMI ≥ 95th percentile, and youth with 85th percentile ≤ BMI ≤ 95th percentile were defined as overweight. Waist circumference (Wc) was classified according to the HD criteria: moderate risk at ≥ 90th percentile and high risk at ≥ 95th percentile. The dietary assessment was based on a 24-hour dietary recall assisted by food records. The USDA nutrient database for the Nutrinux® program was used to analyze dietary intake. The nutrient adequacy ratio (NAR) was calculated by dividing daily individual intake by the dietary recommended intake (DRI) for each nutrient. 9% of the population was overweight, 3% was obese, and 7.5% had abdominal obesity. Chips, cookies and chocolate were eaten 1-3 times/day, consumption of fried foods during the week was increased, and almost half of the youth consumed sugary drinks more than 3 times per week. We observed a decreased intake of energy and protein (P < 0.001, P = 0.003) and of SFA (P = 0.018), and lower NARs for phosphorus, iron, magnesium, vitamin B6, vitamin E, folate, niacin, and thiamin, reflecting lower consumption of fruit, vegetables, milk, and milk products. The youth surveyed have eating habits that put them at risk of developing obesity and chronic disease.Keywords: food intake, health, anthropometric characteristics, Algeria
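As a hedged illustration of the NAR calculation described above, the sketch below divides placeholder daily intakes by placeholder DRI values; the figures are not drawn from the study.

```python
# Hedged illustration of the nutrient adequacy ratio (NAR); intakes and DRIs are placeholder values.
daily_intake = {"iron_mg": 9.5, "folate_ug": 210.0, "vitamin_B6_mg": 1.0}
dri =          {"iron_mg": 15.0, "folate_ug": 400.0, "vitamin_B6_mg": 1.3}

for nutrient, intake in daily_intake.items():
    nar = intake / dri[nutrient]          # NAR = individual daily intake / recommended intake
    print(f"{nutrient}: NAR = {nar:.2f}")
```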
Procedia PDF Downloads 540783 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector
Authors: Mariam Vardiashvili
Abstract:
The economic significance of the asset impairment process is considerable. Impairment reflects the reduction of the future economic benefits or service potential embodied in the asset. The assets owned by public sector entities bring economic benefits or are used for the delivery of free-of-charge services; consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21, Impairment of Non-Cash-Generating Assets, and IPSAS 26, Impairment of Cash-Generating Assets, have been designed with this specificity in mind. When measuring impairment of assets, it is important to select the relevant methods. For the measurement of impaired non-cash-generating assets, IPSAS 21 recommends three methods: the Depreciated Replacement Cost Approach, the Restoration Cost Approach, and the Service Units Approach. The value in use of cash-generating assets (as per IPSAS 26) is measured as the discounted value of the future cash flows expected from the asset. The article classifies public sector assets as non-cash-generating and cash-generating assets and also deals with the factors which should be considered when evaluating impairment of assets. The essence of impairment of non-financial assets and the methods of measuring it are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on how these methods are selected. The traditional and the expected cash flow approaches for calculating the discounted value are reviewed. The article also discusses the issues of recognition of impairment loss and its reflection in financial reporting. The article concludes that, regardless of the functional purpose of the impaired asset and of the method used to measure it, financial reporting should present realistic information regarding the value of the assets. In the theoretical development of the issue, the methods of scientific abstraction, analysis and synthesis were used, and the research was carried out with a systemic approach. The research process draws on international accounting standards, theoretical research, and publications by Georgian and foreign scientists.Keywords: cash-generating assets, non-cash-generating assets, recoverable (usable restorative) value, value of use
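As a hedged numerical illustration of the two discounting approaches reviewed in the article, the sketch below computes a value in use from assumed cash flows: the traditional approach discounts a single best-estimate stream, while the expected cash flow approach discounts probability-weighted cash flows. All figures are hypothetical.

```python
# Hypothetical illustration of value in use under the traditional and expected
# cash flow approaches; all figures are assumed, not drawn from IPSAS examples.
def present_value(cash_flows, rate):
    # Discount each period's cash flow back to today and sum.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

discount_rate = 0.06

# Traditional approach: a single best-estimate stream of future cash flows.
best_estimate = [10_000, 10_000, 10_000]
value_in_use_traditional = present_value(best_estimate, discount_rate)

# Expected cash flow approach: probability-weighted cash flows for each period.
scenarios = [(0.3, 6_000), (0.5, 10_000), (0.2, 14_000)]   # (probability, cash flow)
expected_cf = sum(p * cf for p, cf in scenarios)
value_in_use_expected = present_value([expected_cf] * 3, discount_rate)

print(f"value in use (traditional):   {value_in_use_traditional:,.0f}")
print(f"value in use (expected flow): {value_in_use_expected:,.0f}")
```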
Procedia PDF Downloads 143