Search results for: ArcGIS base maps
784 Assessment of Climate Change Impact on Meteorological Droughts
Authors: Alireza Nikbakht Shahbazi
Abstract:
Climate change is driven by various factors, and drought is one of its major impacts; efficient methods for estimating climate change impacts on drought therefore need to be investigated. The aim of this paper is to investigate climate change impacts on drought in the Karoon3 watershed, located in south-western Iran, in future periods. Atmospheric general circulation model (GCM) data under Intergovernmental Panel on Climate Change (IPCC) scenarios were used for this purpose. In this study, watershed drought under climate change impacts was simulated for future periods (2011 to 2099). The standardized precipitation index (SPI) was selected as the drought index and calculated from mean monthly precipitation data in the Karoon3 watershed over 6-, 12- and 24-month periods. Statistical analysis of daily precipitation and minimum and maximum daily temperature was performed. LARS-WG5 was used to determine the feasibility of producing meteorological data for future periods. Model calibration and verification were performed for the base period (1980-2007). Meteorological data for future periods were simulated under general circulation models and IPCC climate change scenarios, and the drought status under climate change effects was then analyzed using SPI. Results showed that differences between monthly maximum and minimum temperature will decrease under climate change, and spring precipitation will increase while summer and autumn rainfall will decrease. In future periods, precipitation occurs mainly between January and May; the decline in summer and autumn precipitation leads to short-term drought in the study region. The normal and wet SPI categories are more frequent under the B1 and A2 emissions scenarios than under A1B.
Keywords: climate change impact, drought severity, drought frequency, Karoon3 watershed
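The SPI values in this abstract are derived from precipitation accumulated over 6-, 12- and 24-month windows. As an illustration only (not the authors' code), the sketch below implements a simplified z-score variant of SPI; the standard SPI additionally fits a gamma distribution to the accumulated totals and maps its CDF through the inverse normal. The synthetic precipitation series, window length, and threshold are assumptions.

```python
import numpy as np

def spi_zscore(monthly_precip, window):
    """Simplified SPI: standardize rolling k-month precipitation totals.
    (The standard method fits a gamma distribution to the totals and maps
    its CDF through the inverse normal; the z-score below is a common
    first approximation.)"""
    p = np.asarray(monthly_precip, dtype=float)
    # k-month accumulated precipitation ending at each month
    totals = np.convolve(p, np.ones(window), mode="valid")
    return (totals - totals.mean()) / totals.std()

# Synthetic 28-year monthly precipitation record (mm), for illustration
rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=30.0, size=12 * 28)

spi6 = spi_zscore(precip, window=6)
drought_months = int((spi6 <= -1.0).sum())  # "moderate drought" or worse
```

By construction the index has zero mean and unit variance over the record, so SPI ≤ -1 flags months whose 6-month total is at least one standard deviation below normal.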
Procedia PDF Downloads 243

783 Nanofluid-Based Emulsion Liquid Membrane for Selective Extraction and Separation of Dysprosium
Authors: Maliheh Raji, Hossein Abolghasemi, Jaber Safdari, Ali Kargari
Abstract:
Dysprosium is a rare earth element that is essential for many growing high-technology applications. Together with neodymium, dysprosium plays a significant role in applications such as metal halide lamps, permanent magnets, and nuclear reactor control rods. The purification and separation of rare earth elements are challenging because of their similar chemical and physical properties. Among the various methods, membrane processes provide many advantages over conventional separation processes such as ion exchange and solvent extraction. In this work, selective extraction and separation of dysprosium from aqueous solutions containing an equimolar mixture of dysprosium and neodymium by emulsion liquid membrane (ELM) was investigated. The organic membrane phase of the ELM was a nanofluid consisting of multiwalled carbon nanotubes (MWCNT), Span 80 as surfactant, Cyanex 272 as carrier, and kerosene as base fluid, with a nitric acid solution as the internal aqueous phase. Factors affecting the separation of dysprosium, such as carrier concentration, MWCNT concentration, feed phase pH, and stripping phase concentration, were analyzed using the Taguchi method. The optimal experimental condition was obtained using analysis of variance (ANOVA) after 10 min of extraction. Based on the results, using MWCNT nanofluid in the ELM process increases extraction, owing to higher membrane stability and enhanced mass transfer, and a separation factor of 6 for dysprosium over neodymium can be achieved under the optimum conditions. Additionally, the demulsification process was successfully performed, and the membrane phase was effectively reused under the optimum conditions.
Keywords: emulsion liquid membrane, MWCNT nanofluid, separation, Taguchi method
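The Taguchi analysis described above screens four factors at (typically) three levels each with an orthogonal array, ranking factor levels by signal-to-noise (S/N) ratio. A minimal sketch follows; the L9 array layout is standard, but the extraction-efficiency values are purely illustrative, not the paper's data.

```python
import numpy as np

# L9 orthogonal array: 9 runs, 4 factors (carrier conc., MWCNT conc.,
# feed pH, stripping conc.), 3 levels each. Standard L9(3^4) layout.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])
# Hypothetical extraction efficiencies (%) for each run
y = np.array([62.0, 71.0, 68.0, 75.0, 83.0, 70.0, 78.0, 88.0, 81.0])

# "Larger is better" S/N ratio: -10*log10(mean(1/y^2)) per run
sn = -10 * np.log10(1.0 / y**2)

def main_effects(factor):
    """Mean S/N over the runs at each level of the given factor."""
    col = L9[:, factor]
    return [sn[col == lvl].mean() for lvl in (1, 2, 3)]

# Pick, for each factor, the level with the highest mean S/N
best_levels = [int(np.argmax(main_effects(f))) + 1 for f in range(4)]
```

The level combination in `best_levels` is the Taguchi-optimal setting under these made-up responses; ANOVA on the same main effects would then apportion each factor's contribution.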
Procedia PDF Downloads 288

782 Role of Vision Centers in Eliminating Avoidable Blindness Caused Due to Uncorrected Refractive Error in Rural South India
Authors: Ranitha Guna Selvi D, Ramakrishnan R, Mohideen Abdul Kader
Abstract:
Purpose: To study the role of vision centers in managing preventable blindness through refractive error correction in rural South India. Methods: A retrospective analysis of patients attending 15 vision centers in rural South India from January 2021 to December 2021 was done. Medical records of 1,08,581 patients, both new and reviewed (79,562 newly registered patients and 29,019 review patients from the 15 vision centers), were included for data analysis. All patients registered at the vision centers underwent a basic eye examination, including visual acuity, IOP measurement, slit-lamp examination, retinoscopy, and fundus examination. Results: A total of 1,08,581 patients were included in the study; 79,562 were newly registered patients at a vision center and 29,019 were review patients. Among them, 52,201 (48.1%) were male and 56,308 (51.9%) were female. The mean age of all examined patients was 41.03 ± 20.9 years (standard deviation), ranging from 1 to 113 years. Presenting mean visual acuity was 0.31 ± 0.5 in the right eye and 0.31 ± 0.4 in the left eye. Of the 1,08,581 patients, 22,770 had uncorrected refractive error in the right eye and 22,721 in the left eye. Glass prescriptions were given to 17,178 (15.8%) patients, and 8,109 (7.5%) patients were referred to the base hospital for specialty clinic expert opinion or for cataract surgery. Conclusion: Vision centers utilizing teleconsultation for comprehensive eye screening are a very effective tool in reducing avoidable visual impairment caused by uncorrected refractive error. The vision center model is believed to be efficient as it facilitates early detection and management of uncorrected refractive errors.
Keywords: refractive error, uncorrected refractive error, vision center, vision technician, teleconsultation
Procedia PDF Downloads 143

781 Carbohydrate-Based Recommendations as a Basis for Dietary Guidelines
Authors: A. E. Buyken, D. J. Mela, P. Dussort, I. T. Johnson, I. A. Macdonald, A. Piekarz, J. D. Stowell, F. Brouns
Abstract:
Recently, a number of revised dietary guidelines have been published by various health authorities. The aim of the present work was 1) to review the processes (systematic approach/review, inclusion of public consultation) and methodological approaches used to identify and select the evidence base underpinning the established recommendations for total carbohydrate (CHO), fiber, and sugar consumption, and 2) to examine how differences in the methods and processes applied may have influenced the final recommendations. A search of WHO, US, Canadian, Australian, and European sources identified 13 authoritative dietary guidelines with the desired detailed information. Each of these guidelines was evaluated for its scientific basis (types and grading of the evidence) and the processes by which it was developed. Based on the data retrieved, the following conclusions can be drawn: 1) Generally, a relatively high total CHO and fiber intake and a limited intake of sugars (added or free) are recommended. 2) Even where recommendations are quite similar, the specific justifications for quantitative/qualitative recommendations differ across authorities. 3) Differences appear to be due to inconsistencies in underlying definitions of CHO exposure and in the concurrent appraisal of CHO-providing foods and nutrients, as well as the choice and number of health outcomes selected for the evidence appraisal. 4) Differences in the selected articles, time frames, or data aggregation methods appeared to be of rather minor influence. From this assessment, the main recommendations are for: 1) more explicit quantitative justifications for numerical guidelines and communication of uncertainty; and 2) greater international harmonization, particularly with regard to underlying definitions of exposures and the range of relevant nutrition-related outcomes.
Keywords: carbohydrates, dietary fibres, dietary guidelines, recommendations, sugars
Procedia PDF Downloads 258

780 Anticancer Effect of Resveratrol-Loaded Gelatin Nanoparticles in NCI-H460 Non-Small Cell Lung Carcinoma Cell Lines
Authors: N. Rajendra Prasad
Abstract:
Resveratrol (RSV), a grape phytochemical, has drawn greater attention because of its beneficial effects against cancer. However, RSV has some drawbacks, such as instability, poor water solubility, and a short biological half-life, which limit its utilization in the medicine, food, and pharmaceutical industries. In this study, we encapsulated RSV in gelatin nanoparticles (GNPs) and studied its anticancer efficacy in NCI-H460 lung cancer cells. SEM and DLS studies revealed that the prepared RSV-GNPs possess a spherical shape with a mean diameter of 294 nm. The successful encapsulation of RSV in GNPs was achieved with the cross-linker glutaraldehyde, probably through Schiff base reaction and hydrogen bond interaction. Spectrophotometric analysis revealed that a maximum of 93.6% of RSV was entrapped in GNPs. In vitro drug release kinetics indicated an initial burst release followed by a slow and sustained release of RSV from GNPs. The prepared RSV-GNPs exhibited much more rapid and efficient cellular uptake than free RSV. Further, RSV-GNPs treatment showed greater antiproliferative efficacy than free RSV treatment in NCI-H460 cells. Greater ROS generation, DNA damage, and apoptotic incidence were found in RSV-GNPs-treated cells than with free RSV treatment. An erythrocyte aggregation assay showed that the prepared RSV-GNPs formulation elicits no toxic response. HPLC analysis revealed that RSV-GNPs were more bioavailable and had a longer half-life than free RSV. Hence, the GNPs carrier system might be a promising mode for controlled delivery and for an improved therapeutic index of poorly water-soluble RSV.
Keywords: resveratrol, coacervation, anticancer, gelatin nanoparticles, lung cancer, controlled release
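Entrapment figures like the 93.6% reported above are conventionally computed from the drug left unencapsulated in the supernatant after particle separation. A tiny sketch of that standard calculation follows; the masses below are hypothetical, chosen only so the result matches the reported percentage.

```python
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """Entrapment efficiency (%): share of the added drug that ended up
    inside the nanoparticles. free_drug_mg is the unencapsulated drug
    measured (e.g. spectrophotometrically) in the supernatant."""
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

# Illustrative masses: 10 mg RSV added, 0.64 mg recovered free
ee = entrapment_efficiency(total_drug_mg=10.0, free_drug_mg=0.64)  # 93.6
```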
Procedia PDF Downloads 448

779 Lateral Sural Artery Perforators: A Cadaveric Dissection Study to Assess Perforator Surface Anatomy Variability and Average Pedicle Length for Flap Reconstruction
Authors: L. Sun, O. Bloom, K. Anderson
Abstract:
The medial and lateral sural artery perforator flaps (MSAP and LSAP, respectively) are two recently described flaps that are less commonly used in lower limb trauma reconstructive surgery than flaps such as the anterolateral thigh (ALT) flap or the gastrocnemius flap. The LSAP flap has several theoretical benefits over the MSAP, including the ability to be sensate and being more easily manoeuvred into position as a local flap for coverage of lateral knee or leg defects. It is less commonly used in part due to a lack of documented studies of the anatomical reliability of the perforator and an unquantified average length of the pedicle used for microsurgical anastomosis (if used as a free flap) or flap rotation (if used as a pedicled flap). It has been shown to have significantly lower donor site morbidity than other flaps such as the ALT, due to the decreased need for intramuscular dissection, resulting in less muscle loss at the donor site. Eleven cadaveric lower limbs were dissected, yielding a mean of 1.6 perforators per leg, with an average pedicle length of 45 mm to the sural artery and 70 mm to the popliteal artery. While the majority of perforating arteries lay close to the midline (an average of 19 mm lateral to the midline), there were specimens in which the artery was significantly lateral and would likely have been injured by the initial incision during an operation. Adding to the literature base of documented LSAP dissections provides a greater understanding of the anatomical basis of these perforator flaps, and the authors hope this will establish them as a more commonly used and discussed option when managing complicated lower limb trauma requiring soft tissue reconstruction.
Keywords: cadaveric, dissection, lateral, perforator flap, sural artery, surface anatomy
Procedia PDF Downloads 156

778 Nutritionists' Perspective on the Conception of a Telenutrition Platform for Diabetes Care: Qualitative Study
Authors: Choumous Mannoubi, Dahlia Kairy, Brigitte Vachon
Abstract:
The use of technology allows clinicians to provide an individualized approach in a cost-effective manner and to reach a broader client base more easily. Such interventions can be effective in ensuring self-management and follow-up of people with diabetes, reducing the risk of complications by improving accessibility to care services and adherence to health recommendations. Considering users' opinions and fears in the design and implementation stages of these telehealth services seems essential to improve their acceptance and usability. The objective of this study is to describe the telepractice of nutritionists supporting the therapeutic management of diabetic patients and to document the functional requirements of nutritionists for the design of a telenutrition platform. To best identify the requirements and constraints of nutritionists, we conducted individual semi-structured interviews with 10 nutritionists who offered telenutrition services. Using a qualitative design with a descriptive approach based on the Nutrition Care Process Model (mNCP) framework, we explored in depth the state of nutritionists' telepractice in public and private health care settings, as well as their requirements for teleconsultation. Qualitative analyses revealed that nutritionists primarily used telephone calls during the COVID-19 pandemic to provide teleconsultations. Nutritionists identified the following important features for the design of a telenutrition platform: it should support interprofessional collaboration, allow for the development and monitoring of a care plan, integrate with the existing IT environment, be easy to use, accommodate different levels of patient literacy, and allow for easy sharing of educational materials to support nutrition education.
Keywords: telehealth, nutrition, diabetes, telenutrition, teleconsultation, telemonitoring
Procedia PDF Downloads 136

777 Traditional Role of Women and Its Implication in Solid Waste Management in Bauchi Metropolis
Authors: Bogoro Audu Gani, Tobi Nzelibe Ajiji Haruna
Abstract:
Women have both knowledge and expertise whose recognition can lead to more efficient, effective, sustainable, and fair waste management operations. Studies have shown that failure to take cognizance of the traditional role of women in the management of urban environments results in a serious loss of efficiency and productivity. However, urban managers in developing countries are yet to identify and integrate these critical roles of women into urban environmental management. This research is motivated not only by the poor solid waste management but also by the total neglect of the role of women in solid waste management in the Bauchi metropolis. A systematic random sampling technique was adopted for the selection of the samples, and 4% of the study population was taken as the sample size. The major instruments used for data collection were questionnaires, interviews, and direct measurement of household solid waste at source, and the data are presented in tables and charts. It was found that over 95% of sweeping, cooking, and food preparation is exclusively reserved for women in the study area. Women dominate the generation, storage, and collection of household solid waste, with 81%, 96%, and 91%, respectively, within the study area. It was also discovered that segregation can be carried out 95% effectively by women who have free time. However, urban managers in the Bauchi metropolis are yet to identify the role of women with a view to integrating them into solid waste management in order to achieve a healthy and clean living environment in the Bauchi metropolis. Among other suggestions, the paper recommends that the role of women be identified and integrated into policies and programs for a clean and healthy urban living environment; this will not only improve environmental quality but will also increase the income base of the family.
Keywords: women, solid waste, integration, segregation
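The 4% systematic random sample described above can be sketched as follows; with a sampling fraction of 4%, every 25th household is selected after a random start. The population size and seed below are illustrative assumptions, not figures from the study.

```python
import random

def systematic_sample(population_size, fraction, seed=None):
    """Systematic random sampling: choose a random start, then take
    every k-th unit, where k = 1/fraction (4% -> every 25th unit)."""
    k = round(1 / fraction)
    rng = random.Random(seed)
    start = rng.randrange(k)          # random start within the first interval
    return list(range(start, population_size, k))

# Hypothetical sampling frame of 5,000 households
households = systematic_sample(population_size=5000, fraction=0.04, seed=1)
```

Unlike simple random sampling, this spreads the sample evenly across the frame, which suits door-to-door household surveys.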
Procedia PDF Downloads 90

776 The Role of Cornulaca aucheri in Stabilization of Degraded Sandy Soil in Kuwait
Authors: Modi M. Ahmed, Noor Al-Dousari, Ali M. Al-Dousari
Abstract:
Cornulaca aucheri is an annual herb considered a disturbance indicator, currently visible and widely distributed in disturbed lands in the Liyah area. This area has suffered severe land degradation due to multiple interacting factors such as overgrazing, gravel and sand quarrying, military activities, and natural processes. A restoration program was applied after quarry sites were refilled and surface irregularities levelled, in order to rehabilitate the natural vegetation and wildlife to their original state. During the past 10 years of rehabilitation, a noticeable healthy green cover of Cornulaca sp. has developed, especially around the artificial lake and playas. The existence of this species at high density means that the restoration program has succeeded and the site has transitioned from a bare-ground state to a Cornulaca-and-annual-forb state. This is a lower state of the range state-transition succession model, but it is better than bare soil. Cornulaca spp. is a native desert plant that grows in arid conditions on sandy, stony ground, near oases, on sand dunes, and in sandy depressions. Sheep and goats avoid it. Despite its spiny leaves, it provides good grazing for camels and is said to increase the milk supply of lactating females. It is about 80 cm tall and has stems that branch from the base, with faster new green growth in the summer. It shows good environmental potential to be managed as a natural species for the restoration of degraded lands in desert areas.
Keywords: land degradation, range state transition succession model, rehabilitation, restoration program
Procedia PDF Downloads 373

775 Motivating Factors of Mobile Device Applications toward Learning
Authors: Yen-Mei Lee
Abstract:
Mobile learning (m-learning) has been applied in the education field not only because it is an alternative to web-based learning but also because it possesses 'anytime, anywhere' learning features. However, most studies focus on technology-related issues, such as usability and functionality, instead of addressing m-learning from a motivational perspective. Accordingly, the main purpose of the current paper is to integrate critical factors from different motivational theories and related findings to better understand the catalysts of an individual's learning motivation toward m-learning. The main research question for this study is stated as follows: based on different motivational perspectives, which factors of applying mobile devices as a learning medium can facilitate people's learning motivation? Self-Determination Theory (SDT), Uses and Gratification Theory (UGT), Malone and Lepper's taxonomy of intrinsic motivation, and different types of motivation concepts are discussed in the current paper. In line with the review of relevant studies, three motivating factors with five essential elements are proposed. The first key factor is autonomy; learning on one's own path and applying a personalized format are the two critical elements involved in this factor. The second key factor is a built-in instant feedback system during m-learning. The third factor is an interaction system, including communication and collaboration spaces. These three factors can enhance people's learning motivation when mobile devices are applied as a learning medium. To sum up, by discussing m-learning from different motivational perspectives, the current paper differs from previous studies, which simply focus on technical or functional design. Supported by different motivation theories, researchers can clearly understand how mobile devices influence people's learning motivation, and instructional designers and educators can draw on the proposed factors to build their own unique and efficient m-learning environments.
Keywords: autonomy, learning motivation, mobile learning (m-learning), motivational perspective
Procedia PDF Downloads 182

774 Energy Atlas: Geographic Information Systems-Based Energy Analysis and Planning Tool
Authors: Katarina Pogacnik, Ursa Zakrajsek, Nejc Sirk, Ziga Lampret
Abstract:
Due to an increase in living standards along with global population growth and a trend of urbanization, municipalities and regions are faced with an ever-rising energy demand. A challenge has arisen for cities around the world to modify the energy supply chain in order to reduce consumption and CO₂ emissions. The aim of our work is the development of a computational-analytical platform for dynamic support in decision-making and the determination of economic and technical indicators of energy efficiency in a smart city, named Energy Atlas. Similar products in this field take a narrower approach, whereas this platform encompasses a wider spectrum of beneficial and important information for energy planning on a local or regional scale. GIS-based interactive maps provide an extensive database on the potential, use, and supply of energy and renewable energy sources, along with climate, transport, and spatial data of the selected municipality. Beneficiaries of Energy Atlas are local communities, companies, investors, and contractors, as well as residents. The Energy Atlas platform consists of three modules named E-Planning, E-Indicators, and E-Cooperation. The E-Planning module is a comprehensive data service that supports optimal decision-making and offers a set of solutions, along with the feasibility of measures and their effects, in the area of efficient use of energy and renewable energy sources. The E-Indicators module identifies, collects, and develops optimal data and key performance indicators, and develops an analytical application service for dynamic support in managing a smart city with regard to energy use and a sustainable environment. To support cooperation and the direct involvement of citizens of the smart city, the E-Cooperation module is developed with the purpose of integrating the interdisciplinary and sociological aspects of energy end-users. The interaction of all the above-described modules contributes to regional development because it enables a precise assessment of the current situation, strategic planning, detection of potential future difficulties, and the possibility of public involvement in decision-making. The implementation of the technology in the Slovenian municipalities of Ljubljana, Piran, and Novo mesto suggests that the set goals are being achieved to a great extent. Such a thorough urban energy planning tool is viewed as an important piece of the puzzle towards achieving a low-carbon society, a circular economy and, therefore, a sustainable society.
Keywords: circular economy, energy atlas, energy management, energy planning, low-carbon society
Procedia PDF Downloads 306

773 Using a Card Game as a Tool for Developing a Design
Authors: Matthias Haenisch, Katharina Hermann, Marc Godau, Verena Weidner
Abstract:
Over the past two decades, international music education has been characterized by a growing interest in informal learning for formal contexts and a "compositional turn" that has moved from closed to open forms of composing. This change occurs under social and technological conditions that permeate 21st-century musical practices. This forms the background of Musical Communities in the (Post)Digital Age (MusCoDA), a four-year joint research project of the University of Erfurt (UE) and the University of Education Karlsruhe (PHK), funded by the German Federal Ministry of Education and Research (BMBF). Both explore songwriting processes as an example of collective creativity in (post)digital communities, one in formal and the other in informal learning contexts. Collective songwriting will be studied from a network perspective, which will allow us to view boundaries between online and offline, as well as formal, informal, and hybrid contexts, as permeable, and to reconstruct musical learning practices. By comparing these songwriting processes, possibilities for a pedagogical-didactic interweaving of different educational worlds are highlighted. The subproject of the University of Erfurt therefore investigates school music lessons with the help of interviews, videography, and network maps, analyzing new digital pedagogical and didactic possibilities. In the first step, the international literature on songwriting in the music classroom was examined for design development. The analysis focused on the question of which methods and practices are circulating in the current literature. Results from this stage of the project form the basis for the first instructional design, which will help teachers plan regular music classes and subsequently reconstruct musical learning practices under these conditions.
In analyzing the literature, we noticed certain structural methods and concepts that recur, such as the Building Blocks method and the pre-structuring of the songwriting process. From these findings, we developed a deck of cards that both captures the current state of research and serves as a method for design development. With this deck of cards, both teachers and students can plan their individual songwriting lessons by independently selecting and arranging topic, structure, and action cards. In terms of science communication, music educators' interactions with the card game provide us with essential insights for developing the first design. The overall goal of MusCoDA is to develop an empirical model of collective musical creativity and learning, and an instructional design for teaching music in the postdigital age.
Keywords: card game, collective songwriting, community of practice, network, postdigital
Procedia PDF Downloads 64

772 Designing an Editorialization Environment for Repeatable Self-Correcting Exercises
Authors: M. Kobylanski, D. Buskulic, P.-H. Duron, D. Revuz, F. Ruggieri, E. Sandier, C. Tijus
Abstract:
In order to design a cooperative e-learning platform, we observed teams of a teacher [T], a computer scientist [CS], and an exercise programmer-designer [ED] cooperating in the conception of a self-correcting exercise, but without the use of such a device, in order to capture the kinds of interactions a useful platform might provide. To do so, we first ran a task analysis of how T, CS, and ED should cooperate in order to achieve, at best, the task of creating and implementing self-directed, self-paced, repeatable self-correcting exercises (RSE) in the context of open educational resources. The formalization of the whole process was based on the "objectives, activities and evaluations" theory of educational task analysis. Second, using the resulting frame as a "how-to-do-it" guide, we ran a series of three contrasting RSE-production hackathons to collect data about the cooperative process that could later be used to design the collaborative e-learning platform. Third, we used two complementary methods to collect, code, and analyze the survey data: the directional flow of interaction among T-CS-ED experts holding a functional role, and Means-End Problem Solving analysis. Fourth, we listed the set of derived recommendations useful for the design of the exerciser as a cooperative e-learning platform. The final recommendations underline the necessity of building (i) an ecosystem that can sustain teams of T-CS-ED experts, (ii) a data safety platform that nevertheless offers accessibility and open discussion about the production of exercises and their resources, and (iii) a good architecture allowing the inheritance of parts of the code of any exercise already in the database, as well as fast implementation of new kinds of exercises along with their associated learning activities.
Keywords: editorialization, open educational resources, pedagogical alignment, produsage, repeatable self-correcting exercises, team roles
Procedia PDF Downloads 124

771 Excited State Structural Dynamics of Retinal Isomerization Revealed by a Femtosecond X-Ray Laser
Authors: Przemyslaw Nogly, Tobias Weinert, Daniel James, Sergio Carbajo, Dmitry Ozerov, Antonia Furrer, Dardan Gashi, Veniamin Borin, Petr Skopintsev, Kathrin Jaeger, Karol Nass, Petra Bath, Robert Bosman, Jason Koglin, Matthew Seaberg, Thomas Lane, Demet Kekilli, Steffen Brünle, Tomoyuki Tanaka, Wenting Wu, Christopher Milne, Thomas A. White, Anton Barty, Uwe Weierstall, Valerie Panneels, Eriko Nango, So Iwata, Mark Hunter, Igor Schapiro, Gebhard Schertler, Richard Neutze, Jörg Standfuss
Abstract:
Ultrafast isomerization of retinal is the primary step in a range of photoresponsive biological functions, including vision in humans and ion transport across bacterial membranes. We studied the sub-picosecond structural dynamics of retinal isomerization in the light-driven proton pump bacteriorhodopsin using an X-ray laser. Twenty snapshots with near-atomic spatial resolution and femtosecond temporal resolution show how the excited all-trans retinal samples conformational states within the protein binding pocket before passing through a highly twisted geometry and emerging in the 13-cis conformation. The aspartic acid residues and functional water molecules in proximity to the retinal Schiff base respond collectively to the formation and decay of the initial excited state and to retinal isomerization. These observations reveal how the protein scaffold guides this remarkably efficient photochemical reaction.
Keywords: bacteriorhodopsin, free-electron laser, retinal isomerization mechanism, time-resolved crystallography
Procedia PDF Downloads 251

770 Optimal Emergency Shipment Policy for a Single-Echelon Periodic Review Inventory System
Authors: Saeed Poormoaied, Zumbul Atan
Abstract:
Emergency shipments provide a powerful mechanism for alleviating the risk of imminent stock-outs and can yield substantial benefits in an inventory system. Customer satisfaction and a high service level are immediate consequences of utilizing emergency shipments. In this paper, we consider a single-echelon periodic review inventory system consisting of a single local warehouse, replenished from a central warehouse with ample capacity, in an infinite horizon setting. Since the structure of the optimal policy appears to be complicated, we analyze this problem under an order-up-to-S inventory control framework, the (S, T) policy, with emergency shipments considered. In each period of the periodic review policy, there is a single opportunity, at any point in time, to request an emergency shipment in case of a stock-out. The goal is to determine the timing and amount of the emergency shipment during a period (the emergency shipment policy) as well as the base stock periodic review policy parameters (the replenishment policy). We show how taking advantage of an emergency shipment during periods improves the performance of the classical (S, T) policy, especially when the fixed and unit emergency shipment costs are small. Investigating the structure of the objective function, we develop an exact algorithm for finding the optimal solution. We also provide a heuristic and an approximation algorithm for the periodic review inventory system problem. The experimental analyses indicate that the heuristic algorithm is computationally more efficient than the approximation algorithm, but in terms of solution quality, the approximation algorithm performs very well. We achieve up to 13% cost savings over the (S, T) policy by applying the proposed emergency shipment policy. Moreover, our computational results reveal that the approximated solution is often within 0.21% of the globally optimal solution.
Keywords: emergency shipment, inventory, periodic review policy, approximation algorithm
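The cost advantage of the emergency option can be illustrated with a toy Monte Carlo sketch of an order-up-to-S review cycle. The demand distribution, cost figures, zero replenishment lead time, and the "ship exactly the shortage" trigger rule below are all illustrative assumptions, not the paper's model:

```python
import random

def simulate(S, periods, emergency=False, seed=0):
    """Toy periodic-review (S, T) simulation with an optional single
    emergency shipment per period (illustrative, not the paper's model)."""
    rng = random.Random(seed)
    holding, backorder, emerg_fixed, emerg_unit = 1.0, 20.0, 5.0, 2.0
    cost = 0.0
    for _ in range(periods):
        inv = S                        # order-up-to-S at each review (zero lead time assumed)
        demand = rng.randint(5, 15)
        inv -= demand
        if inv < 0 and emergency:
            shortage = -inv
            cost += emerg_fixed + emerg_unit * shortage  # ship exactly the shortage
            inv = 0
        cost += holding * max(inv, 0) + backorder * max(-inv, 0)
    return cost / periods

base = simulate(10, 10000)
with_emerg = simulate(10, 10000, emergency=True)
```

With a backorder penalty well above the emergency fixed-plus-unit cost, the emergency option lowers the average period cost, mirroring the qualitative finding that the benefit is largest when emergency shipment costs are small.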
Procedia PDF Downloads 141
769 Students' Competencies in the Use of Computer Assistive Technology at Akropong School for the Blind in the Eastern Region of Ghana
Authors: Joseph Ampratwum, Yaw Nyadu Offei, Afua Ntoaduro, Frank Twum
Abstract:
The use of computer assistive technology has captured the attention of individuals with visual impairment. Children with visual impairments who are tactual learners have one unique need which is quite different from that of all other disability groups: they depend on computer assistive technology for reading, writing, and receiving and sending information. The objective of the study was to assess students' competencies in the use of computer assistive technology at Akropong School for the Blind in Ghana. This became necessary because little research has been conducted to document the competencies and challenges in the use of computers among students with visual impairments in Africa. A case study design with a mixed research strategy was adopted for the study. A purposive sampling technique was used to sample 35 students from Akropong School for the Blind in the eastern region of Ghana. The researcher gathered both quantitative and qualitative data to measure students' competencies in keyboarding skills and Job Access with Speech (JAWS), as well as other challenges. The findings indicated that students' competency in keyboard skills was comparatively higher than in JAWS application use; thus, students had reached higher stages in the conscious competencies matrix in the former than in the latter. It was generally noted that the challenges limiting effective use of students' competencies in computer assistive technology in the school were more personal than external, because most of the challenges were due to the individual's response to the training and familiarity in developing competencies in using computer assistive technology. Based on this, it was recommended that efforts be made to stock the laboratory with additional computers. Directly in line with the first recommendation, it was further suggested that more practice time be created for the students to maximize computer use. Also, licensed JAWS software must be acquired by the school to advance students' competence in using computer assistive technology.
Keywords: computer assistive technology, job access with speech, keyboard, visual impairment
Procedia PDF Downloads 345
768 Determining the Spatial Vulnerability Levels and Typologies of Coastal Cities to Climate Change: Case of Turkey
Authors: Mediha B. Sılaydın Aydın, Emine D. Kahraman
Abstract:
One of the important impacts of climate change is sea level rise. Turkey is a peninsula, so the coastal areas of the country are threatened by sea level rise, and the urbanized coastal areas are highly vulnerable to climate change. With the aim of enhancing the spatial resilience of urbanized areas, this question arises: what should be the priority intervention subject in the urban planning process for a given city? To answer this question, focusing on the problem of sea level rise, this study aims to determine the spatial vulnerability typologies and levels of Turkey's coastal cities based on morphological, physical and social characteristics. As a method, the spatial vulnerability of coastal cities is determined in two steps: level and type. Firstly, physical structure, morphological structure and social structure were examined in determining spatial vulnerability levels. By determining these levels, the most vulnerable areas were revealed as a priority for adaptation studies. Secondly, all parameters are also used to determine spatial typologies. Typologies are determined for coastal cities to serve as a base for urban planning studies. Adaptation to climate change is crucial for developing countries like Turkey, so this methodology and the resulting typologies could be a guide for urban planners as spatial directors and an example for other developing countries in the context of adaptation to climate change. The results demonstrate that the urban settlements located on the coasts of the Marmara Sea, the Aegean Sea and the Mediterranean, respectively, are more vulnerable to sea level rise than the cities located on the Black Sea's coast.
Keywords: climate change, coastal cities, vulnerability, urban land use planning
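The abstract does not give the aggregation formula, but a common way to combine normalized physical, morphological and social indicators into a single vulnerability level is a weighted sum after min-max normalization. The sketch below is purely illustrative; the city names, indicator values and weights are invented:

```python
def min_max_normalize(values):
    """Scale a column of raw indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def vulnerability_scores(cities, weights):
    """cities: {name: [physical, morphological, social]} raw indicators
    (higher = more vulnerable); weights sum to 1. Illustrative only."""
    names = list(cities)
    cols = list(zip(*(cities[n] for n in names)))   # one column per indicator
    norm = [min_max_normalize(c) for c in cols]
    return {n: sum(w * norm[j][i] for j, w in enumerate(weights))
            for i, n in enumerate(names)}

scores = vulnerability_scores(
    {"CityA": [0.8, 0.6, 0.7], "CityB": [0.2, 0.3, 0.4], "CityC": [0.5, 0.9, 0.1]},
    weights=[0.4, 0.3, 0.3])
```

Ranking the resulting scores yields the vulnerability levels; clustering the normalized indicator vectors themselves (rather than their weighted sum) would correspond to the typology step.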
Procedia PDF Downloads 327
767 Modeling of Tsunami Propagation and Impact on West Vancouver Island, Canada
Authors: S. Chowdhury, A. Corlett
Abstract:
Large tsunamis strike the British Columbia coast every few hundred years. The Cascadia Subduction Zone, which extends along the Pacific coast from Vancouver Island to Northern California, is one of the most seismically active regions in Canada. Significant earthquakes have occurred in this region, including the 1700 Cascadia earthquake, with an estimated magnitude of 9.2. Based on geological records, experts have predicted that a 'great earthquake' of similar magnitude may happen in this region at any time. Such an earthquake is expected to generate a large tsunami that could impact the coastal communities on Vancouver Island. Since many of these communities are in remote locations, they are more likely to be vulnerable, as post-earthquake relief efforts would be impacted by damage to critical road infrastructure. To assess the coastal vulnerability of these communities, a hydrodynamic model was developed using MIKE-21 software. We considered a 500-year probabilistic earthquake design criterion, including subsidence, in this model. Bathymetry information was collected from the Canadian Hydrographic Service (CHS) and the National Oceanic and Atmospheric Administration (NOAA). An aerial survey of the communities was conducted using a Cessna-172 aircraft, and the information was converted into a topographic digital elevation map. Both data sets were incorporated into the model, whose domain size was about 1000 km x 1300 km. The model was calibrated against the tsunami that occurred off the west coast of Moresby Island on October 28, 2012. Water levels from the model were compared with two tide gauge stations close to Vancouver Island, and the model output showed satisfactory agreement. For this study, the design water level was taken as the high water level plus the projected sea level rise for the year 2100. Hourly wind speeds from eight directions were collected from different wind stations, and a 200-year return period wind speed was used in the model for storm events. The regional model was set for a 12-hour simulation period, which took more than 16 hours per run on a dual Xeon E7 CPU machine with a K80 GPU. Boundary information for the local model was generated from the regional model. The local model was developed using a high-resolution mesh to estimate coastal flooding for the communities. It was observed from this study that many communities will be affected by a Cascadia tsunami, and inundation maps were developed for the communities. The infrastructure inside the coastal inundation areas was identified. Coastal vulnerability planning and resilient design solutions will be implemented to significantly reduce the risk.
Keywords: tsunami, coastal flooding, coastal vulnerability, earthquake, Vancouver, wave propagation
Procedia PDF Downloads 132
766 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback
Authors: Takuro Kida, Yuichi Kida
Abstract:
In this paper, we present the mathematical basis of a new class of algorithms that treats a historical cause of continuous discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are given, we first introduce a detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the filter bank. Further, feedback is introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge for signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is shown that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown why the narrow perception that tends to create isolation shows an apparent advantage in the short term and why, often, such narrow thinking becomes intimate with discriminatory action in a human group. Throughout these considerations, it is argued that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.
Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization
Procedia PDF Downloads 145
765 Lower Limb Oedema in Beckwith-Wiedemann Syndrome
Authors: Mihai-Ionut Firescu, Mark A. P. Carson
Abstract:
We present a case of inferior vena cava agenesis (IVCA) associated with bilateral deep venous thrombosis (DVT) in a patient with Beckwith-Wiedemann syndrome (BWS). In adult patients with BWS presenting with bilateral lower limb oedema, specific aetiological factors should be considered, including cardiomyopathy and intra-abdominal tumours. Congenital malformations of the IVC, by causing relative venous stasis, can lead to lower limb oedema either directly or indirectly by favouring lower limb venous thromboembolism; however, they are yet to be reported as an associated feature of BWS. Given its life-threatening potential, prompt initiation of treatment for bilateral DVT is paramount. In BWS patients, however, this can prove more complicated: due to overgrowth, above-average body weight can persist from birth throughout childhood. In this case, the patient's weight reached 170 kg, affecting the anticoagulation choice, as direct oral anticoagulants have a limited evidence base in patients with a body mass above 120 kg. Furthermore, the presence of IVCA leads to a long-term increased venous thrombosis risk. Therefore, patients with IVCA and bilateral DVT warrant specialist consideration and may benefit from multidisciplinary team management, with haematology and vascular surgery input. Conclusion: here, we showcased a rare cause of bilateral lower limb oedema, namely bilateral deep venous thrombosis complicating IVCA in a patient with Beckwith-Wiedemann syndrome. The importance of this case lies in its novelty, as the association between IVC agenesis and BWS has not yet been described. Furthermore, the treatment of DVT in such situations requires special consideration, taking into account the patient's weight and the presence of a significant predisposing vascular abnormality.
Keywords: Beckwith-Wiedemann syndrome, bilateral deep venous thrombosis, inferior vena cava agenesis, venous thromboembolism
Procedia PDF Downloads 239
764 Assessment of Forest Above Ground Biomass Through Linear Modeling Technique Using SAR Data
Authors: Arjun G. Koppad
Abstract:
The study was conducted in Joida taluk of Uttara Kannada district, Karnataka, India, to assess land use land cover (LULC) and forest aboveground biomass using L-band SAR data. The study area contains dense, moderately dense, and sparse forests. The sampled area was 0.01 percent of the forest area, with 30 sampling plots selected randomly. The point centre quadrate (PCQ) method was used to select the trees, and the tree growth parameters, viz., tree height, diameter at breast height (DBH), and diameter at the tree base, were collected. Tree crown density was measured with a densitometer. The biomass of each sample plot was estimated using the standard formula. In this study, the LULC classification was done using the Freeman-Durden, Yamaguchi and Pauli polarimetric decompositions. It was observed that the Freeman-Durden decomposition gave the best LULC classification, with an accuracy of 88 percent. An attempt was made to estimate aboveground biomass from the SAR backscatter. Fully polarimetric quad-pol ALOS-2 PALSAR-2 L-band data (HH, HV, VV and VH) were used. A SAR backscatter-based regression model was implemented to retrieve the forest aboveground biomass of the study area. Cross-polarization (HV) showed a good correlation with forest aboveground biomass. Multiple linear regression analysis was performed to estimate the aboveground biomass of the natural forest areas of Joida taluk. Among the different polarization combinations (HH & HV, VV & HH, HV & VH, VV & VH), the combination of HH and HV showed a good correlation between field-measured and predicted biomass. The RMSE and R² values for HH & HV and HH & VV were 78 t/ha and 0.861, and 81 t/ha and 0.853, respectively. Hence the model can be recommended for estimating AGB in dense, moderately dense, and sparse forests.
Keywords: forest, biomass, LULC, backscatter, SAR, regression
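A two-predictor regression of this kind can be sketched with ordinary least squares, fitting biomass against two backscatter channels. The sketch below solves the normal equations in pure Python; the backscatter values and coefficients are synthetic stand-ins, not the paper's data:

```python
def fit_mlr(X, y):
    """Ordinary least squares for y = b0 + b1*x1 + b2*x2 via normal equations.
    X: list of (x1, x2) backscatter pairs (e.g. HH, HV in dB); y: biomass (t/ha)."""
    A = [[1.0, x1, x2] for x1, x2 in X]
    n = len(A)
    # normal equations: (A^T A) b = A^T y
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(3)] for i in range(3)]
    Aty = [sum(A[k][i] * y[k] for k in range(n)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting
    M = [row[:] + [rhs] for row, rhs in zip(AtA, Aty)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

# synthetic plot data generated from known coefficients b = (50, 3, 8)
X = [(-8, -15), (-6, -12), (-7, -14), (-5, -10), (-9, -16), (-4, -11)]
y = [50 + 3 * x1 + 8 * x2 for x1, x2 in X]
b0, b1, b2 = fit_mlr(X, y)
```

With real plot data, the recovered coefficients would be applied pixel-wise to the HH/HV backscatter images to map AGB, and the RMSE and R² reported above would come from comparing predictions against the field measurements.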
Procedia PDF Downloads 28
763 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks
Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang
Abstract:
The capacity of conventional cellular networks has reached its upper bound, and this can be well handled by introducing low-cost, easy-to-deploy femtocells. The spectrum interference issue becomes more critical as value-added multimedia services grow in two-tier cellular networks. Spectrum allocation is one of the effective methods of interference mitigation. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are the players and the available frequency channels are the strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of the channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function of the distance weight ratio, aimed at suppressing co-channel interference in the same network layer. This scenario is more suitable for actual network deployment, and the system possesses high robustness. In the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing spectrum allocation in a distributed fashion. Numerical results show that the signal to interference and noise ratio can be clearly improved through the spectrum allocation scheme and that users' downlink quality of service can be satisfied. Besides, the average spectrum efficiency in the cellular network can be significantly promoted, as simulation results show.
Keywords: femtocell networks, game theory, interference mitigation, spectrum allocation
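The paper's exact utility is not reproduced in the abstract, but the flavor of a distributed non-cooperative channel game can be sketched with best-response dynamics and a stand-in negative-log distance penalty (the station positions, the penalty form, and the 100 m scale are illustrative assumptions):

```python
import math

def interference(i, channel_of, positions):
    """Sum of a negative-log distance penalty over co-channel stations;
    an illustrative stand-in for the paper's utility function."""
    total = 0.0
    for j, ch in enumerate(channel_of):
        if j != i and ch == channel_of[i]:
            d = math.dist(positions[i], positions[j])
            total += -math.log(d / 100.0)   # closer co-channel neighbour -> bigger penalty
    return total

def best_response(positions, n_channels, rounds=20):
    """Each femto BS repeatedly picks the channel minimizing its penalty."""
    n = len(positions)
    channel_of = [0] * n                    # all stations start on channel 0
    for _ in range(rounds):
        changed = False
        for i in range(n):
            best = min(range(n_channels),
                       key=lambda c: interference(
                           i, channel_of[:i] + [c] + channel_of[i + 1:], positions))
            if best != channel_of[i]:
                channel_of[i], changed = best, True
        if not changed:                     # no station wants to deviate: equilibrium
            break
    return channel_of

pos = [(0, 0), (10, 0), (0, 10), (50, 50)]  # three close stations, one distant
alloc = best_response(pos, n_channels=2)
```

Interference is incurred only between stations on the same channel, matching the abstract's mechanism; the loop halts when no player can improve unilaterally, i.e., at a Nash equilibrium of the toy game.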
Procedia PDF Downloads 158
762 An Ecological Approach to Understanding Student Absenteeism in a Suburban, Kansas School
Authors: Andrew Kipp
Abstract:
Student absenteeism is harmful to both the school and the absentee student. One approach to improving student absenteeism is targeting contextual factors within the students' learning environment. However, contemporary literature has not taken an ecological agency approach to understanding student absenteeism. Ecological agency is a theoretical framework that magnifies the interplay between the environment and the actions of people within it. To elaborate, the person's history and aspirations and the environmental conditions provide potential outlets for, or restrictions on, their intended actions. The framework provides the unique perspective of understanding absentee students' decision-making through the affordances and constraints found in their learning environment. To that end, the study was guided by the question, "Why do absentee students decide to engage in absenteeism in a suburban Kansas school?" A case study methodology was used to answer the research question. Four suburban Kansas high school absentee students in the 2020-2021 school year were selected for the study. The fall 2020 semester was in a remote learning setting, and the spring 2021 semester was in an in-person learning setting. The study captured their decision-making with respect to school attendance through semi-structured interviews, prolonged observations, drawings, and concept maps. The data were analyzed through thematic analysis. The findings revealed that peer socialization opportunities, methods of instruction, shifts in cultural beliefs due to COVID-19, manifestations of anxiety and lack of space to escape anxiety, social media bullying, and the inability to receive academic tutoring motivated the participants' daily decision to either attend or miss school. The findings provided a basis for improving several institutional and classroom practices. These practices included more student-led and less teacher-led instruction in both in-person and remote learning environments, promoting socialization through classroom collaboration and clubs based on emerging student interests, reducing instances of bullying through prosocial education, providing safe spaces for students to leave the classroom to manage their anxiety, and more opportunities for one-on-one tutoring to improve grades. The study provides an example of using the ecological agency approach to better understand the personal and environmental factors that lead to absenteeism. It also informs educational policies and classroom practices to better promote student attendance. Further research should investigate other school contexts using the ecological agency theoretical framework to better understand the influence of the school environment on student absenteeism.
Keywords: student absenteeism, ecological agency, classroom practices, educational policy, student decision-making
Procedia PDF Downloads 144
761 Performance Evaluation of Solid Lubricant Characteristics at Different Sliding Conditions
Authors: Suresh Kumar Reddy Narala, Rakesh Kumar Gunda
Abstract:
In modern industry, mechanical parts are subjected to friction and wear, leading to heat generation, which affects the reliability, life and power consumption of machinery. To reduce tribological losses due to friction and wear, a lubricant with suitably viscous properties is applied to allow very smooth relative motion between two sliding surfaces. Advances in modern tribology have facilitated the application of solid lubricants in various industries. Solid lubricant additives that form a viscous thin film between the sliding surfaces can adequately wet and adhere to the work surface. In the present investigation, an attempt has been made to evaluate the tribological behavior of various solid lubricants, namely MoS2, graphite, and boric acid, at different sliding conditions. The base oil used in this study was SAE 40 oil with a viscosity of 220 cSt at 40°C. The tribological properties were measured on a pin-on-disc tribometer. An experimental set-up was developed for the effective supply of solid lubricants to the pin-disc interface zone. The results obtained from the experiments show that the friction coefficient increases with increasing applied load for all the considered environments. The MoS2 solid lubricant exhibits a larger load-carrying capacity than graphite and boric acid. The present research work also contributes to the understanding of the behavior of the film thickness distribution of solid lubricants, using the potential contact technique, under different sliding conditions. The results presented in this research work are expected to form a scientific basis for selecting the best solid lubricant in various industrial applications for possible minimization of friction and wear.
Keywords: friction, wear, temperature, solid lubricant
Procedia PDF Downloads 348
760 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers
Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy
Abstract:
In urban planning, an increasing number of cities require wind analyses to verify the comfort of public spaces and the areas around buildings. These studies are made using computational fluid dynamics (CFD) simulation. However, this technique is often based on wind information taken from meteorological stations located several kilometers from the site of analysis. The approximate input data on the project surroundings produce imprecise results for this type of analysis; they can only be used to get the general behavior of wind in a zone, not to evaluate precise wind speeds. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasonic anemometers. These are wireless devices that send immediate wind data to a remote server. Assembled in an array, they generate geo-localized data on wind, such as speed, temperature and pressure, and allow us to compare wind behavior on a specific site or building. These Netatmo-type anemometers communicate via Wi-Fi with central equipment, which shares the data acquired by a wide variety of devices, such as wind speed, indoor and outdoor temperature, rainfall, and sunshine. Besides its precision, this method extracts geo-localized data on any type of site that can be fed back into the architectural design of a building or a public place. Furthermore, this method allows a precise calibration of a virtual wind tunnel using numerical aeraulic simulations (e.g., with STAR-CCM+ software) and hence the development of a complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases connected ultrasonic anemometers that were installed for an 18-month survey at four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers, whose diversity of typologies and buildings allows different ways of capturing wind energy to be considered. The objective of this approach is to categorize the different types of wind in urban areas. This, particularly the identification of the minimum and maximum wind spectrum, helps define the choice and performance of the wind energy capturing devices that could be installed there. Relevant factors include the location on the roof of a building, the type of wind, the height of the device relative to the roof levels, and the potential nuisances generated. The method allows the characteristics of wind turbines to be identified in order to maximize their performance on an urban site with turbulent wind.
Keywords: computer fluid dynamic simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology
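The reduction from raw geo-localized anemometer readings to a per-site wind spectrum, and from the spectrum to a device pre-selection rule, can be sketched as follows. The site names, speed values, and the cut-in/cut-out thresholds are illustrative assumptions, not the survey's data:

```python
from collections import defaultdict
from statistics import mean

def wind_spectrum(readings):
    """readings: iterable of (site_id, wind_speed_m_s) tuples from the
    anemometer array; returns per-site min/mean/max (illustrative)."""
    by_site = defaultdict(list)
    for site, speed in readings:
        by_site[site].append(speed)
    return {site: {"min": min(v), "mean": mean(v), "max": max(v)}
            for site, v in by_site.items()}

def suitable_turbine(spec, cut_in=3.0, cut_out=25.0):
    """Hypothetical rule: a site suits a small turbine if its mean speed
    exceeds the cut-in speed and its max stays below the cut-out speed."""
    return spec["mean"] > cut_in and spec["max"] < cut_out

data = [("roof_A", 2.1), ("roof_A", 4.5), ("roof_A", 6.0),
        ("roof_B", 1.0), ("roof_B", 1.8)]
spectra = wind_spectrum(data)
```

In the actual survey, such per-site statistics over 18 months of readings are what feed both the wind categorization and the CFD calibration described above.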
Procedia PDF Downloads 103
759 Ontology-Based Fault Detection and Diagnosis System: Querying and Reasoning Examples
Authors: Marko Batic, Nikola Tomasevic, Sanja Vranes
Abstract:
One of the mainstays of the ubiquitous efforts related to energy conservation and energy efficiency improvement is the retrofit of high-energy consumers in buildings. In general, HVAC systems represent the highest energy consumers in buildings. However, they usually suffer from mal-operation and/or malfunction, causing even higher energy consumption than necessary. Various fault detection and diagnosis (FDD) systems can be successfully employed for this purpose, especially when it comes to application at the single device/unit level. In the case of more complex systems, where multiple devices operate in the context of the same building, significant energy efficiency improvements can only be achieved through the application of comprehensive FDD systems relying on additional higher-level knowledge, such as the devices' geographical location, served area, and intra- and inter-system dependencies. This paper presents a comprehensive FDD system that relies on a common knowledge repository storing all critical information. The discussed system is deployed as a test-bed platform at the Fiumicino and Malpensa airports in Italy. This paper presents the advantages of implementing the knowledge base as an ontology and demonstrates the improved functionality of such a system through examples of typical queries and reasoning that enable the derivation of high-level energy conservation measures (ECMs). To this end, key SPARQL queries and SWRL rules, based on the two instantiated airport ontologies, are elaborated. The detection of high-level irregularities in the operation of airport heating/cooling plants is discussed, and an estimation of the energy savings is reported.
Keywords: airport ontology, knowledge management, ontology modeling, reasoning
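The airport ontologies and the actual SPARQL/SWRL rules are not reproduced in the abstract, but the style of query involved can be sketched with a toy triple-pattern matcher in pure Python. The fact names below (AHU_1, serves, hasTempDeviation, and so on) are hypothetical, not taken from the deployed system:

```python
def query(triples, pattern):
    """Match one (s, p, o) pattern against a triple store; names starting
    with '?' are variables. Returns a list of bindings (toy SPARQL-like)."""
    results = []
    for s, p, o in triples:
        binding, ok = {}, True
        for var, val in zip(pattern, (s, p, o)):
            if var.startswith("?"):
                binding[var] = val
            elif var != val:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# hypothetical airport-ontology facts
kb = [
    ("AHU_1", "serves", "Terminal_T1"),
    ("AHU_2", "serves", "Terminal_T2"),
    ("Terminal_T1", "hasTempDeviation", "high"),
    ("Terminal_T2", "hasTempDeviation", "normal"),
]

# which zones are flagged, and which air handlers serve a flagged zone?
faulty = {b["?z"] for b in query(kb, ("?z", "hasTempDeviation", "high"))}
suspect = [b["?a"] for b in query(kb, ("?a", "serves", "?z")) if b["?z"] in faulty]
```

The join from flagged zones back to the devices serving them is the kind of cross-device inference that single-unit FDD cannot express, and which the paper's SPARQL queries and SWRL rules perform over the real ontologies.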
Procedia PDF Downloads 540
758 Modelling of a Biomechanical Vertebral System for Seat Ejection in Aircraft Using Lumped Mass Approach
Authors: R. Unnikrishnan, K. Shankar
Abstract:
In the case of high-speed fighter aircraft, seat ejection is designed mainly for the safety of the pilot in an emergency. Strong windblast due to the high velocity of flight is one of the main difficulties in clearing the tail of the aircraft, and the excessive G-forces generated can immobilize the pilot and prevent escape. In most cases, seats are ejected from the aircraft by explosives or by rocket motors attached to the bottom of the seat. Ejection forces are primarily in the vertical direction, with the objective of attaining the maximum possible velocity in a specified period of time. The safe ejection parameters are studied to estimate the critical time of ejection for various geometries and flight velocities. An equivalent analytical two-dimensional biomechanical model of the human spine, consisting of vertebrae and intervertebral discs, has been developed using a lumped mass approach. The 24 vertebrae of the cervical, thoracic and lumbar regions, together with the head mass and the pelvis, are modelled as 26 rigid structures, and the 25 intervertebral discs are modelled as flexible joint structures. The rigid structures are modelled as mass elements and the flexible joints as spring and damper elements. Here, motion is restricted to the mid-sagittal plane, forming a 26-degree-of-freedom system. The equations of motion are derived for the translational movement of the spinal column. An ejection force with a linearly increasing acceleration profile is applied as vertical base excitation to the pelvis, and the dynamic vibrational response of each vertebra is estimated in the time domain.
Keywords: biomechanical model, lumped mass, seat ejection, vibrational response
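A reduced version of such a lumped-mass chain can be sketched with a handful of masses, spring-damper joints, and a linearly increasing base acceleration integrated by semi-implicit Euler. All parameter values below are illustrative, not the paper's (which uses 26 bodies with physiological stiffness data):

```python
def simulate_chain(n=5, m=1.0, k=5.0e4, c=200.0, jerk=3000.0,
                   dt=1.0e-4, t_end=0.05):
    """Vertical lumped-mass chain (pelvis at index 0 ... head at n-1) with
    spring-damper joints; the base (seat) acceleration ramps linearly at
    `jerk` m/s^3. A toy stand-in for the paper's 26-DOF model."""
    x = [0.0] * n          # mass displacements from rest
    v = [0.0] * n
    xb = vb = 0.0          # base (seat) displacement and velocity
    t = 0.0
    while t < t_end:
        ab = jerk * t                      # linearly increasing ejection acceleration
        vb += ab * dt
        xb += vb * dt
        a = []
        for i in range(n):
            lo_x, lo_v = (xb, vb) if i == 0 else (x[i - 1], v[i - 1])
            f = k * (lo_x - x[i]) + c * (lo_v - v[i])          # joint below mass i
            if i + 1 < n:
                f += k * (x[i + 1] - x[i]) + c * (v[i + 1] - v[i])  # joint above
            a.append(f / m)
        for i in range(n):                 # semi-implicit Euler update
            v[i] += a[i] * dt
            x[i] += v[i] * dt
        t += dt
    return x, xb

x, xb = simulate_chain()
```

The compression accumulates toward the base, since each lower joint carries the inertial load of all masses above it; in the full model, the peak joint forces obtained this way are what bound the safe acceleration profile.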
Procedia PDF Downloads 231
757 Experimental Investigation on Geosynthetic-Reinforced Soil Sections via California Bearing Ratio Test
Authors: S. Abdi Goudazri, R. Ziaie Moayed, A. Nazeri
Abstract:
Loose soils are normally of weak bearing capacity due to their structural nature; exposed to heavy traffic loads, they fail in most cases. To tackle this issue, geotechnical engineers have come up with different approaches, one of which is the use of geosynthetic-reinforced soil-aggregate systems. As these polymeric reinforcements offer notable economic and environmental advantages, they have become widespread in practice during the last decades. The present research investigates the efficiency of four different types of these reinforcements in increasing the bearing capacity of two-layered soil sections, using a series of California Bearing Ratio (CBR) tests. The studied sections comprise a 10 cm-thick layer of no. 161 Firouzkooh sand (weak subgrade) and a 10 cm-thick layer of compacted aggregate material (base course), classified as SP and GW according to the Unified Soil Classification System (USCS), respectively. The aggregate layer was compacted to a relative density (Dr) of 95% at the optimum water content (Wopt) of 6.5%. The applied reinforcements comprised two kinds of geocomposite (types A and B), a geotextile, and a geogrid, which were embedded at the interface of the lower and upper layers of the soil-aggregate system. As the standard CBR mold was not of appropriate height for this study, the mold used for soaked CBR tests was utilized. To compare the stress-settlement behavior of the studied specimens, CBR values at penetrations of 2.5 mm and 5 mm were considered. The obtained results demonstrated 21% and 24.5% increases in CBR value in the presence of geocomposite type A and the geogrid, respectively. On the other hand, the effect of both the geotextile and geocomposite type B on CBR values was generally insignificant in this research.
Keywords: geosynthetics, geogrid, geotextile, CBR test, increasing bearing capacity
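The CBR value itself is the measured penetration stress expressed as a percentage of the standard crushed-rock stress, commonly taken as 6.9 MPa at 2.5 mm and 10.3 MPa at 5.0 mm penetration. A small sketch, with illustrative stresses rather than the paper's actual readings:

```python
STANDARD_STRESS = {2.5: 6.9, 5.0: 10.3}   # MPa, standard crushed-rock values

def cbr(stress_2_5_mpa, stress_5_0_mpa):
    """CBR (%) at 2.5 mm and 5.0 mm penetration; the governing value is
    normally the 2.5 mm one unless the 5.0 mm ratio comes out higher."""
    cbr_25 = 100.0 * stress_2_5_mpa / STANDARD_STRESS[2.5]
    cbr_50 = 100.0 * stress_5_0_mpa / STANDARD_STRESS[5.0]
    return cbr_25, cbr_50, max(cbr_25, cbr_50)

# hypothetical stresses for an unreinforced section, then scaled by the
# 21% improvement reported for geocomposite type A
unreinforced = cbr(1.38, 2.06)
reinforced = cbr(1.38 * 1.21, 2.06 * 1.21)
```

Comparing the governing CBR of reinforced and unreinforced sections in this way is how the 21% and 24.5% increases quoted above are expressed.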
Procedia PDF Downloads 111
756 Electrochemical APEX for Genotyping MYH7 Gene: A Low Cost Strategy for Minisequencing of Disease Causing Mutations
Authors: Ahmed M. Debela, Mayreli Ortiz, Ciara K. O'Sullivan
Abstract:
The completion of the Human Genome Project (HGP) has paved the way for mapping the diversity in the overall genome sequence, which helps to understand the genetic causes of inherited diseases and susceptibility to drugs or environmental toxins. Arrayed primer extension (APEX) is a microarray-based minisequencing strategy for screening disease-causing mutations. It is derived from Sanger DNA sequencing and uses fluorescently labelled dideoxynucleotides (ddNTPs) to terminate a growing DNA strand from a primer whose 3' end is designed immediately upstream of a site where a single nucleotide polymorphism (SNP) occurs. The use of DNA polymerase gives APEX very high accuracy and specificity, which in turn makes it a method of choice for multiplex SNP detection. Coupling the high specificity of this method with the high sensitivity, low cost, and compatibility with miniaturization of electrochemical techniques would offer an excellent platform for the detection of mutations as well as the sequencing of DNA templates. We are developing an electrochemical APEX for the analysis of SNPs found in the MYH7 gene in a group of cardiomyopathy patients. ddNTPs were labelled with four different redox-active compounds with four distinct potentials. Thiolated oligonucleotide probes were immobilised on gold and glassy carbon substrates, followed by hybridisation with complementary target DNA just adjacent to the base to be extended by the polymerase. Electrochemical interrogation was performed after the incorporation of the redox-labelled dideoxynucleotide. The work involved the synthesis and characterisation of the redox-labelled ddNTPs, as well as the optimisation and characterisation of surface functionalisation strategies and nucleotide incorporation assays.
Keywords: array based primer extension, labelled ddNTPs, electrochemical, mutations
Procedia PDF Downloads 246
755 Stabilization of Lateritic Soil Sample from Ijoko with Cement Kiln Dust and Lime
Authors: Akinbuluma Ayodeji Theophilus, Adewale Olutaiwo
Abstract:
When building roads and paved surfaces, a strong foundation is essential, and it must be built from a durable material that can withstand years of traffic while remaining reliable. A frequent problem in the construction of roads and pavements is the lack of high-quality, long-lasting materials for the pavement structure (base, subbase, and subgrade). Hence, this study examined the stabilization of a lateritic soil sample from Ijoko with cement kiln dust (CKD) and lime. The study adopted an experimental design. Laboratory tests, including classification, swelling potential, compaction, California bearing ratio (CBR), and unconfined compressive strength tests, were conducted on the laterite sample treated with CKD and lime in increments of 2% up to 10% of the dry weight of the soil sample. The test results showed that the studied soil can be classified as an A-7-6 and CL soil using the American Association of State Highway and Transportation Officials (AASHTO) and the Unified Soil Classification System (USCS) schemes, respectively. The plasticity index (PI) of the studied soil reduced from 30.5% to 29.9% on application of CKD. The maximum dry density reduced from 1.97 Mg/m³ to 1.86 Mg/m³ on application of CKD, and from 1.97 Mg/m³ to 1.88 Mg/m³ on application of lime. The swell potential on CKD application reduced from 0.05% to 0.039%. The study concluded that soil stabilization is an effective and economical way of improving road pavements for engineering benefit, and that the degree of effectiveness of stabilization in pavement construction depends on the type of soil to be stabilized. The study therefore recommends that stabilized soil mixtures be used as subbase material for flexible pavements, since they are suitable for this purpose.
Keywords: lateritic soils, sand, cement, stabilization, road pavement
Procedia PDF Downloads 91