Search results for: flow and heat transfer
427 Identifying Controlling Factors for the Evolution of Shallow Groundwater Chemistry of Ellala Catchment, Northern Ethiopia
Authors: Grmay Kassa Brhane, Hailemariam Siyum Mekonen
Abstract:
This study was designed to identify the hydrogeochemical and anthropogenic processes controlling the evolution of groundwater chemistry in the Ellala catchment, which covers an areal extent of about 296.5 km2. The chemical analysis revealed that the major ions in the groundwater are Ca2+, Mg2+, Na+, and K+ (cations) and HCO3-, PO43-, Cl-, NO3-, and SO42- (anions). Most of the groundwater samples (68.42%) indicated that the groundwater in the catchment is non-alkaline. In addition to the contribution of aquifer material, the solid materials and liquid wastes discharged from different sources can be the main drivers of pH and EC in the groundwater. It is observed that the EC of the groundwater is fairly correlated with the TDS, indicating that highly mineralized water is more conductive than water with low concentration. The degree of salinity of the groundwater increases along the groundwater flow path from east to west; thus, areas surrounding Mekelle City are highly saline due to the liquid and solid wastes discharged from the city and the industries. The groundwater facies in the catchment are dominated by calcium, magnesium, and bicarbonate, labeled as Ca-Mg-HCO3 and Mg-Ca-HCO3. The main geochemical process controlling the evolution of the groundwater chemistry in the catchment is rock-water interaction, particularly carbonate dissolution; due to the clay layer in the aquifer, reverse ion exchange also occurs. Minor silicate weathering and halite dissolution also contribute to the evolution of groundwater chemistry in the catchment. The groundwater is under-saturated with calcite, dolomite, and aragonite minerals; hence, the more these minerals come into contact with the groundwater, the more they dissolve. The main source of calcium and magnesium in the groundwater is the dissolution of carbonate minerals (calcite and dolomite), since carbonate rocks are the dominant aquifer materials in the catchment. In addition, the weathering of dolerite rock is a possible source of magnesium ions. The relatively higher concentration of sodium over chloride indicates that the source of sodium ions is reverse ion exchange and/or weathering of sodium-bearing materials, such as shale and dolerite, rather than halite dissolution. The high concentrations of phosphate, nitrate, and chloride in the groundwater are mainly of anthropogenic origin and call for treatment, quality control, and management in the catchment. From the Base Exchange Index analysis, the groundwater in the catchment is dominated by a meteoric origin, although this needs further groundwater chemistry study with isotope dating analysis.
Keywords: Ellala catchment, factor, chemistry, geochemical, groundwater
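The Base Exchange Index the abstract leans on can be computed directly from major-ion data. Below is a minimal sketch using Stuyfzand's common formulation (BEX = Na+ + K+ + Mg2+ − 1.0716 Cl−, all in meq/L); the formula choice and the sample concentrations are illustrative assumptions, not values from this study:

```python
# Hedged sketch: Base Exchange Index (BEX) from major-ion data, after
# Stuyfzand: BEX = (Na+ + K+ + Mg2+) - 1.0716 * Cl-, all in meq/L.
# Positive BEX suggests freshening (meteoric recharge); negative suggests
# salinization. The sample values below are placeholders, not study data.

def bex(na: float, k: float, mg: float, cl: float) -> float:
    """All inputs in meq/L."""
    return (na + k + mg) - 1.0716 * cl

sample = {"na": 2.1, "k": 0.1, "mg": 3.4, "cl": 1.8}  # placeholder well data
value = bex(**sample)
print(f"BEX = {value:+.2f} meq/L ->",
      "freshening (meteoric signature)" if value > 0 else "salinization")
```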
Procedia PDF Downloads 76
426 Lean Implementation in a Nurse Practitioner Led Pediatric Primary Care Clinic: A Case Study
Authors: Lily Farris, Chantel E. Canessa, Rena Heathcote, Susan Shumay, Suzanna V. McRae, Alissa Collingridge, Minna K. Miller
Abstract:
Objective: To describe how the Lean approach can be applied to improve access, quality, and safety of care in an ambulatory pediatric primary care setting. Background: Lean was originally developed by Toyota manufacturing in Japan and subsequently adapted for use in the healthcare sector. Lean is a systematic approach focused on identifying and reducing waste within organizational processes, improving patient-centered care and efficiency. Limited literature is available on the implementation of Lean methodologies in a pediatric ambulatory care setting. Methods: A strategic continuous improvement event, or Rapid Process Improvement Workshop (RPIW), was launched with the aim of evaluating and structurally supporting clinic workflow, capacity building, and sustainability, and ultimately improving access to care and enhancing the patient experience. The Lean process consists of five specific activities: current state/process assessment (value stream map); development of a future state map (value stream map after waste reduction); identification, quantification, and prioritization of the process improvement opportunities; implementation and evaluation of process changes; and audits to sustain the gains. Staff engagement is a critical component of the Lean process. Results: Through the implementation of the RPIW and shifting workload among the administrative team, four hours of wasted time moving between desks and doing work were eliminated from the Administrative Clerk's role. To streamline clinic flow, the Nursing Assistants completed patient measurements and vitals for the Nurse Practitioners, reducing patient wait times and adding value to the patients' visits with the Nurse Practitioners. Additionally, through the Nurse Practitioners' engagement in the Lean process, a need was recognized to articulate the clinic's vision and mission and to align the NP role and scope of practice with the agency's and Ministry of Health's strategic plans. Conclusions: Continuous improvement work in the Pediatric Primary Care NP Clinic has provided a unique opportunity to improve the quality of care delivered and has facilitated further alignment of the daily continuous improvement work with the strategic priorities of the Ministry of Health.
Keywords: ambulatory care, lean, pediatric primary care, system efficiency
Procedia PDF Downloads 300
425 Nanoparticle Supported, Magnetically Separable Metalloporphyrin as an Efficient Retrievable Heterogeneous Nanocatalyst in Oxidation Reactions
Authors: Anahita Mortazavi Manesh, Mojtaba Bagherzadeh
Abstract:
Metalloporphyrins are well known to mimic the activity of monooxygenase enzymes. In this regard, metalloporphyrin complexes have been largely employed as valuable biomimetic catalysts, owing to the critical roles they play in oxygen transfer processes in catalytic oxidation reactions. Research in this area is based on different strategies to design selective, stable, and high-turnover catalytic systems. Immobilization of expensive metalloporphyrin catalysts onto supports appears to be a good way to improve their stability, selectivity, and catalytic performance because of the support environment and other advantages with respect to recovery and reuse. In other words, supporting metalloporphyrins provides a physical separation of active sites, thus minimizing catalyst self-destruction and dimerization of unhindered metalloporphyrins. Furthermore, heterogeneous catalytic oxidations have become an important target since such processes are used in industry, helping to minimize the problems of industrial waste treatment. Hence, the immobilization of these biomimetic catalysts is much desired. An attractive approach to preparing the heterogeneous catalyst involves immobilization of complexes on silica-coated magnetic nanoparticles. Fe3O4@SiO2 magnetic nanoparticles have been studied extensively due to their superparamagnetic properties, large surface-area-to-volume ratio, and easy functionalization. Using heterogenized homogeneous catalysts is an attractive option for facile separation of the catalyst, simplified product work-up, and continuity of the catalytic system. Homogeneous catalysts immobilized on magnetic nanoparticle (MNP) surfaces occupy a unique position, combining the advantages of both homogeneous and heterogeneous catalysts. In addition, the superparamagnetic nature of MNPs enables very simple separation of the immobilized catalysts from the reaction mixture using an external magnet. In the present work, an efficient heterogeneous catalyst was prepared by immobilizing a manganese porphyrin on functionalized magnetic nanoparticles through an aminopropyl linkage. The prepared catalyst was characterized by elemental analysis, FT-IR spectroscopy, X-ray powder diffraction, atomic absorption spectroscopy, UV-Vis spectroscopy, and scanning electron microscopy. The application of the immobilized metalloporphyrin in the oxidation of various organic substrates was explored using gas chromatographic (GC) analyses. The results showed that the supported Mn-porphyrin catalyst (Fe3O4@SiO2-NH2@MnPor) is an efficient and reusable catalyst in oxidation reactions. Our catalytic system exhibits high catalytic activity in terms of turnover number (TON) and reaction conditions. Leaching and recycling experiments revealed that the nanocatalyst can be recovered several times without loss of activity or magnetic properties. The most important advantage of this heterogenized catalytic system is the simplicity of catalyst separation, in which the catalyst can be separated from the reaction mixture by applying a magnet. Furthermore, the separation and reuse of the magnetic Fe3O4 nanoparticles were very effective and economical.
Keywords: Fe3O4 nanoparticle, immobilized metalloporphyrin, magnetically separable nanocatalyst, oxidation reactions
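As a side note on the activity metric mentioned above, turnover number (TON) and turnover frequency (TOF) are routinely derived from GC conversion data. A minimal sketch follows; every quantity in it is an invented placeholder, not a result from this work:

```python
# Hedged sketch: TON/TOF bookkeeping for a supported catalyst.
# All numbers below are illustrative placeholders, not values from the study.

def turnover_number(mol_product: float, mol_catalyst: float) -> float:
    """TON = moles of product formed per mole of active metal."""
    return mol_product / mol_catalyst

def turnover_frequency(ton: float, hours: float) -> float:
    """TOF = TON per unit time (h^-1)."""
    return ton / hours

mol_substrate = 1.0e-3          # 1 mmol substrate charged
conversion = 0.95               # 95% conversion from GC analysis
mol_mn = 5.0e-6                 # 5 umol Mn loading on the support
ton = turnover_number(mol_substrate * conversion, mol_mn)
print(f"TON = {ton:.0f}, TOF = {turnover_frequency(ton, 2.0):.0f} h^-1")
```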
Procedia PDF Downloads 299
424 Using Business Simulations and Game-Based Learning for Enterprise Resource Planning Implementation Training
Authors: Carin Chuang, Kuan-Chou Chen
Abstract:
An Enterprise Resource Planning (ERP) system is an integrated information system that supports the seamless integration of all the business processes of a company. Implementing an ERP system can increase efficiencies and decrease costs while helping improve productivity. Many organizations, including large, medium, and small-sized companies, have adopted ERP systems over the past decades. Although an ERP system can bring competitive advantages to organizations, the lack of a proper training approach in ERP implementation is still a major concern. Organizations understand the importance of ERP training to adequately prepare managers and users. The low return on investment for ERP training, however, makes it difficult for knowledge workers to transfer what is learned in training to their jobs in the workplace. Inadequate and inefficient ERP training limits the value realization and success of an ERP system. Hence the need for profound change and innovation in ERP training, both in the industrial workplace and in Information Systems (IS) education in academia. An innovative ERP training approach can improve users' knowledge of business processes and hands-on skills in mastering an ERP system. It can also serve as educational material for IS students in universities. The purpose of the study is to examine the use of ERP simulation games via the ERPsim system to train IS students in learning ERP implementation. ERPsim is a business simulation game developed by the ERPsim Lab at HEC Montréal, and the game runs on a real-life SAP (Systems Applications and Products) ERP system. The training uses the ERPsim system as the tool for Internet-based simulation games and is designed as online student competitions during class. The competitions involve student teams, with the facilitation of the instructor, and put the students' business skills to the test via intensive simulation games on a real-world SAP ERP system. The teams run the full business cycle of a manufacturing company while interacting with suppliers, vendors, and customers through sending and receiving orders, delivering products, and completing the entire cash-to-cash cycle. To learn a range of business skills, each student needs to adopt an individual business role and make business decisions around the products and business processes. Based on the training experiences learned from rounds of business simulations, the findings show that learners can make mistakes at reduced risk, which helps them build self-confidence in problem-solving. In addition, learners' reflections on their mistakes help them identify the root causes of problems and further improve the efficiency of the training. ERP instructors teaching with the innovative approach report significant improvements in student evaluation, learner motivation, attendance, and engagement, as well as increased learner technology competency. The findings of the study can provide ERP instructors with guidelines to create an effective learning environment and can be transferred to a variety of other educational fields in which trainers are migrating towards a more active learning approach.
Keywords: business simulations, ERP implementation training, ERPsim, game-based learning, instructional strategy, training innovation
Procedia PDF Downloads 139
423 Deep Learning in Chest Computed Tomography to Differentiate COVID-19 from Influenza
Authors: Hongmei Wang, Ziyun Xiang, Ying Liu, Li Yu, Dongsheng Yue
Abstract:
Intro: COVID-19 (Coronavirus Disease 2019) has greatly changed the global economic, political, and financial ecology. The mutation of the coronavirus in the UK in December 2020 brought new panic to the world. Deep learning was applied to chest computed tomography (CT) scans of COVID-19 and influenza to describe their characteristics. The predominant feature of COVID-19 pneumonia is ground-glass opacification, followed by consolidation. Lesion density: most lesions appear as ground-glass shadows, and some lesions coexist with solid lesions. Lesion distribution: the focus is mainly on the dorsal side of the periphery of the lung, with the lower lobes as the focus, often close to the pleura. Other features include grid-like shadows in ground-glass lesions, thickening of diseased vessels, air bronchogram signs, and halo signs. Severe disease involves both lungs entirely, showing white-lung signs; air bronchograms can be seen, and there can be a small amount of pleural effusion in the bilateral chest cavity. At the same time, this year's flu season could be near its peak after surging throughout the United States for months. Chest CT in influenza infection is characterized by focal ground-glass shadows in the lungs, with or without patchy consolidation, and bronchiolar air bronchograms visible in the consolidation. There are patchy ground-glass shadows, consolidation, air bronchogram signs, mosaic lung perfusion, etc. The lesions are mostly fused and prominent near the hila of both lungs. Grid-like shadows and small patchy ground-glass shadows are visible. Deep neural networks have great potential in image analysis and diagnosis that traditional machine learning algorithms do not. Method: Aiming at the two major infectious diseases currently circulating in the world, COVID-19 and influenza, the chest CT scans of patients with the two infectious diseases are classified and diagnosed using deep learning algorithms. The residual network is proposed to solve the problem of network degradation when there are too many hidden layers in a deep neural network (DNN). The deep residual network (ResNet) is a milestone in the history of convolutional neural networks (CNNs) for images, solving the problem of the difficult training of deep CNN models. Many visual tasks can achieve excellent results by fine-tuning ResNet. The pre-trained convolutional neural network ResNet is introduced as a feature extractor, eliminating the need to design complex models and perform time-consuming training. Fastai is based on PyTorch, packages best practices for deep learning strategies, and provides a practical way to handle diagnosis issues. Based on the one-cycle approach of the Fastai algorithm, the classification and diagnosis of lung CT for the two infectious diseases is realized, and a high recognition rate is obtained. Results: A deep learning model was developed to efficiently identify the differences between COVID-19 and influenza using chest CT.
Keywords: COVID-19, Fastai, influenza, transfer network
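A minimal sketch of the kind of pipeline described in the Method section — a pre-trained ResNet fine-tuned with fastai's one-cycle policy. The folder layout, ResNet variant, and hyperparameters are assumptions for illustration, not the authors' actual configuration:

```python
# Hedged sketch: transfer learning on chest-CT slices with fastai + ResNet.
# Directory layout (covid/ and influenza/ subfolders) and all hyperparameters
# are illustrative assumptions, not the authors' configuration.
from fastai.vision.all import (ImageDataLoaders, Resize, vision_learner,
                               resnet50, accuracy)

dls = ImageDataLoaders.from_folder(
    "chest_ct",             # chest_ct/covid, chest_ct/influenza (assumed layout)
    valid_pct=0.2,          # hold out 20% of slices for validation
    item_tfms=Resize(224),  # ResNet's standard input size
)

# Pre-trained ResNet as the feature extractor, with a new classifier head.
learn = vision_learner(dls, resnet50, metrics=accuracy)

# fine_tune drives fastai's one-cycle policy (fit_one_cycle) under the hood,
# matching the one-cycle approach referenced in the abstract.
learn.fine_tune(5)
print(learn.validate())
```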
Procedia PDF Downloads 142
422 Resolving Urban Mobility Issues through Network Restructuring of Urban Mass Transport
Authors: Aditya Purohit, Neha Bansal
Abstract:
Unplanned urbanization and multidirectional sprawl of cities have resulted in increased motorization and deteriorating transport conditions: traffic congestion, longer commutes, pollution, an increased carbon footprint, and, above all, increased fatalities. In order to overcome these problems, various practices have been adopted, including promoting and implementing mass transport, traffic junction channelization, and smart transport. However, these methods are found to focus primarily on vehicular mobility rather than people's accessibility. Against this research gap, this paper tries to resolve the mobility issues of Ahmedabad city in India, which, being the economic capital of Gujarat state, has a huge commuter and visitor inflow. This research aims to resolve the traffic congestion and urban mobility issues, focusing on the Gujarat State Regional Transport Corporation (GSRTC) for the city of Ahmedabad, by analyzing the existing operations and network structure of GSRTC, followed by finding possibilities of integrating it with other modes of urban transport. The network restructuring (NR) methodology is used with appropriate variations, based on commuter demand and the growth pattern of the city. To do this, 'scenarios' based on priority issues (using 12 parameters) and their best possible solutions are established after a route network analysis of a 2,700-person sample across 20 traffic junctions/nodes in the city. Approximately a 5% sample (of passenger inflow) at each node is considered, using a stratified random sampling technique. The two scenarios are - Scenario 1: resolving mobility issues by use of a Special Purpose Vehicle (SPV), a joint venture between GSRTC and private operators, to establish a feeder service providing transfers for passengers moving from the inner city to identified peripheral terminals; and Scenario 2: augmenting existing mass transport services, such as BRTS and AMTS, to act as feeder services to the identified peripheral terminals. Each of these is then analyzed for its suitability/feasibility for network restructuring. A desire-line diagram constructed from this analysis indicates that, on average, 62% of designated GSRTC routes overlap with the mass transport service routes of BRTS and AMTS in the city. This has resulted in duplication of bus services, causing traffic congestion, especially at the Central Bus Station (CBS). Terminating GSRTC services on the periphery of the city is found to be the best network restructuring proposal. This keeps GSRTC buses in the city fringe area and prevents them from entering the city core. These GSRTC end-terminals are integrated with BRTS and AMTS services, which helps in segregating intra-state and inter-state bus services. The research concludes that the absence of an integrated multimodal transport network has made transport access complex for commuters. As further scope of research, the value of access time within total travel time, its implication for the generalized cost of a trip, and how it varies city-wise may be taken up.
Keywords: mass transportation, multi-modal integration, network restructuring, travel behavior, urban transport
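The 62% overlap figure comes from comparing route networks. A minimal sketch of such an overlap computation is shown below; the route and segment identifiers are invented placeholders, and the 50% overlap criterion is an assumption, not the study's actual rule:

```python
# Hedged sketch: estimating route overlap between an intercity operator and
# city mass-transit services. Route/segment IDs are invented placeholders;
# the study's actual network data and overlap criterion are not public here.

def overlap_share(route: set, city_segments: set) -> float:
    """Fraction of a route's segments also served by city transit."""
    return len(route & city_segments) / len(route)

gsrtc_routes = {
    "R1": {"s1", "s2", "s3", "s4"},
    "R2": {"s3", "s5", "s6"},
    "R3": {"s7", "s8"},
}
brts_amts_segments = {"s1", "s2", "s3", "s5", "s6"}

# Count a GSRTC route as 'overlapping' if most of it duplicates city service.
overlapping = [r for r, segs in gsrtc_routes.items()
               if overlap_share(segs, brts_amts_segments) >= 0.5]
print(f"{100 * len(overlapping) / len(gsrtc_routes):.0f}% of routes overlap")
```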
Procedia PDF Downloads 197
421 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance
Authors: Ammar Alali, Mahmoud Abughaban
Abstract:
Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents were a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved post-sticking. However, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction, and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot signature that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency as the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for the field, in real time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class while identifying redundant features. The correlation output is interpreted as a probability curve for stuck pipe incident prediction in real time. Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user to the expected incident based on the pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of stuck pipe problems requires a method to capture geological, geophysical, and drilling data and to recognize the indicators of this issue at the field and geological formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe
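A minimal sketch of the real-time prediction component described above: a live window of surface channels is correlated against a pre-determined signature, and an alert fires once a threshold is passed. The channel set, window size, and 0.8 threshold are assumptions, and the data are synthetic:

```python
# Hedged sketch: correlating a live window of surface-drilling channels
# against a pre-computed stuck-pipe signature. Channel set, window length,
# and threshold are illustrative assumptions, not the authors' parameters.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a field-specific signature: 4 surface channels x 64 samples
# (e.g., hookload, torque, standpipe pressure, RPM - assumed channel set).
signature = rng.normal(size=(4, 64))

def correlation_score(window: np.ndarray, sig: np.ndarray) -> float:
    """Mean Pearson correlation across channels (rows = channels)."""
    return float(np.mean([np.corrcoef(w, s)[0, 1]
                          for w, s in zip(window, sig)]))

def check_batch(window: np.ndarray, threshold: float = 0.8) -> float:
    score = correlation_score(window, signature)
    if score >= threshold:
        print(f"ALERT: stuck-pipe risk, correlation={score:.2f}")
    return score

# Demo: a live window resembling the signature triggers the alert.
check_batch(signature + rng.normal(scale=0.1, size=signature.shape))
```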
Procedia PDF Downloads 227
420 Financing the Welfare State in the United States: The Recent American Economic and Ideological Challenges
Authors: Rafat Fazeli, Reza Fazeli
Abstract:
This paper focuses on the study of the welfare state and social wage in the leading liberal economy, the United States. The welfare state acquired broad acceptance as a major socioeconomic achievement of liberal democracy in the Western industrialized countries during the postwar boom period. The modern and modified vision of capitalist democracy offered, on the one hand, the possibility of a high growth rate and, on the other hand, the possibility of continued progression of a comprehensive system of social support for a wider population. The economic crises of the 1970s provided the ground for a great shift in economic policy and ideology in several Western countries, most notably the United States and the United Kingdom (and to a lesser extent Canada under Prime Minister Brian Mulroney). In the 1980s, the free-market-oriented reforms undertaken under Reagan and Thatcher greatly affected the economic outlook not only of the United States and the United Kingdom but of the whole Western world. The movement behind this shift in policy is often called neo-conservatism. The neoconservatives blamed transfer programs for the decline in economic performance during the 1970s and argued that cuts in spending were required to go back to the golden age of full employment. The agenda of both the Reagan and Thatcher administrations was rolling back the welfare state, and their budgets included a wide range of cuts to social programs. The question is how successful Reagan's and Thatcher's efforts to achieve retrenchment were. The paper involves an empirical study concerning the distributive role of the welfare state in the two countries. Other studies have often concentrated on the redistributive effect of fiscal policy on different income brackets. This study examines the net benefit/burden position of the working population with respect to state expenditures and taxes in the postwar period. This measurement will enable us to find out whether the working population has received a net gain (or net social wage). The study will discuss how the expansion of social expenditures and the trend of the 'net social wage' can be linked to distinct forms of economic and social organization. It provides an empirical foundation for analyzing the growing significance of the 'social wage', or the collectivization of consumption, and the share of social or collective consumption in the total consumption of the working population in recent decades. The paper addresses three other major questions. The first is whether the expansion of social expenditures has posed any drag on capital accumulation and economic growth; the findings of this study provide an analytical foundation to evaluate the neoconservative claim that the welfare state is itself the source of economic stagnation that leads to the crisis of the welfare state. The second is whether the increasing ideological challenges from the right and the competitive pressures of globalization have led to retrenchment of the American welfare state in recent decades. The third is how social policies have performed in the presence of rising inequalities in recent decades.
Keywords: the welfare state, social wage, the United States, limits to growth
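The net-benefit/burden measurement described above reduces to simple bookkeeping: benefits received by the working population minus the taxes it pays. A minimal sketch follows, with invented placeholder figures rather than data from the study:

```python
# Hedged sketch of the net-social-wage bookkeeping the abstract describes:
# benefits received by the working population minus the taxes they pay.
# The figures are invented placeholders, not data from the study.

def net_social_wage(benefits_received: float, taxes_paid: float) -> float:
    """Positive -> net gain for labor; negative -> net burden."""
    return benefits_received - taxes_paid

year_data = {  # billions of dollars, illustrative only
    1980: {"benefits": 350.0, "taxes": 400.0},
    1990: {"benefits": 620.0, "taxes": 640.0},
    2000: {"benefits": 900.0, "taxes": 870.0},
}
for year, d in sorted(year_data.items()):
    nsw = net_social_wage(d["benefits"], d["taxes"])
    print(f"{year}: net social wage = {nsw:+.0f}")
```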
Procedia PDF Downloads 209
419 Geographical Information System and Multi-Criteria Based Approach to Locate Suitable Sites for Industries to Minimize Agriculture Land Use Changes in Bangladesh
Authors: Nazia Muhsin, Tofael Ahamed, Ryozo Noguchi, Tomohiro Takigawa
Abstract:
One of the most challenging issues in achieving sustainable development and food security is land use change. The crisis of land for agricultural production mainly arises from the unplanned transformation of agricultural lands to infrastructure development, i.e., urbanization and industrialization. Land use without sustainability assessment can impact food security and environmental protection. Bangladesh, a densely populated country with limited arable land, is now facing challenges in meeting sustainable food security. Agricultural lands are being used for economic growth by establishing industries. The industries are spreading from urban areas to suburban areas and consuming agricultural land. To minimize agricultural land losses from unplanned industrialization, compact economic zones should be identified through a scientific approach. Therefore, the purpose of the study was to find suitable sites for industrial growth by land suitability analysis (LSA) using a Geographical Information System (GIS) and multi-criteria analysis (MCA). The goal of the study was to emphasize both agricultural lands and industries for sustainable land use. The study also analyzed agricultural land use changes in a suburban area using statistical data on agricultural lands and primary data on the existing industries of the study area. The criteria selected for the LSA were proximity to major roads, proximity to local roads, and distance from rivers, waterbodies, settlements, flood-flow zones, and agricultural lands. The spatial datasets for the criteria were collected from the respective departments of Bangladesh. In addition, the elevation dataset was taken from the SRTM (Shuttle Radar Topography Mission) data source. The criteria were further analyzed with factors and constraints in ArcGIS®. Experts' opinions were applied to weight the criteria according to the analytical hierarchy process (AHP), a multi-criteria technique. The decision rule was set using the 'weighted overlay' tool to aggregate the factors and constraints with the weights of the criteria. The LSA found that only 5% of the land was most suitable for industrial sites, with few compact lands for industrial zones. The developed LSA is expected to help land use policymakers and urban developers ensure the sustainability of land use and agricultural production.
Keywords: AHP (analytical hierarchy process), GIS (geographic information system), LSA (land suitability analysis), MCA (multi-criteria analysis)
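A minimal sketch of the two computational steps named above — AHP weighting from a pairwise comparison matrix and a raster 'weighted overlay'. The 3×3 comparison matrix, the criteria chosen, and the tiny rasters are invented placeholders, not the study's data:

```python
# Hedged sketch: AHP weighting plus a raster 'weighted overlay', mirroring
# the workflow the abstract describes in ArcGIS. The pairwise matrix and the
# example rasters are invented placeholders.
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Principal-eigenvector weights from a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Example: three assumed criteria compared pairwise on Saaty's 1-9 scale.
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])
w = ahp_weights(pairwise)

# Criteria rasters, rescaled beforehand to a common 1-9 suitability scale.
rasters = np.stack([
    np.array([[9, 7], [3, 1]]),   # proximity to major roads
    np.array([[5, 5], [7, 9]]),   # distance from rivers/waterbodies
    np.array([[1, 3], [9, 7]]),   # avoidance of agricultural land
])
suitability = np.tensordot(w, rasters, axes=1)  # weighted overlay
print(np.round(suitability, 1))
```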
Procedia PDF Downloads 263
418 Inhabitants’ Adaptation to the Climate's Evolutions in Cities: A Survey of City Dwellers’ Climatic Experiences’ Construction
Authors: Geraldine Molina, Malou Allagnat
Abstract:
An entry through meteorological and climatic phenomena, technical knowledge, and engineering science has long been favored by research and local public action to analyze the urban climate, develop strategies to reduce its changes, and adapt urban spaces. However, in their daily practices and sensitive experiences, city dwellers are confronted with the climate and constantly deal with its fluctuations. In this way, these actors develop knowledge, skills, and tactics to regulate their comfort and adapt to climatic variations. Therefore, the empirical observation and analysis of these lived experiences represent major scientific and social challenges. This contribution proposes to question these relationships of the inhabitants to the urban climate. It tackles the construction of inhabitants' climatic experiences to answer a central question: how do city dwellers deal with the urban climate and adapt to its different variations? Indeed, the city raises the question of how populations adapt to different spatial and temporal climatic variations. Local impacts of global climate change are combined with the urban heat island phenomenon and other microclimatic effects, as well as seasonal, daytime, and night-time fluctuations. To provide answers, the presentation focuses on the results of a CNRS research project (Géraldine Molina), part of which is linked to the European project Nature For Cities (H2020, Marjorie Musy, Scientific Director). From a theoretical point of view, the contribution is based on a renewed definition of adaptation centered on the capacity of individuals and social groups, an approach recently opened up by social scientists. The research adopts a "radical interdisciplinarity" approach to shed light on the links between the social dynamics of climate (inhabitants' perceptions, representations, and practices) and the physical processes that characterize the urban climate. To do so, it relies on a methodological combination of different survey techniques borrowed from the social sciences (geography, anthropology, sociology) and linked to the work, methodologies, and results of the engineering sciences. From 2016 to 2019, a survey was carried out in two districts of Lyon whose morphological, microclimatic, and social characteristics differ greatly, namely the 6th arrondissement and the Guillotière district. To explore the construction of climatic experiences over the long term, putting them into perspective with the life trajectories of individuals, 70 semi-directive interviews were conducted with inhabitants. In order to also survey climatic experiences as they unfold at a given time and moment, observation and measurement campaigns of physical phenomena and questionnaires were conducted in public spaces by an interdisciplinary research team. The contribution at ICUC 2020 will mainly focus on the presentation of the qualitative survey conducted with the inhabitants.
Keywords: sensitive experiences, ways of life, thermal comfort, radical interdisciplinarity
Procedia PDF Downloads 118
417 Management and Genetic Characterization of Local Sheep Breeds for Better Productive and Adaptive Traits
Authors: Sonia Bedhiaf-Romdhani
Abstract:
The sheep (Ovis aries) was domesticated approximately 11,000 years before present in the Fertile Crescent from the Asian mouflon (Ovis orientalis). Northern African (NA) sheep, 7,000 years old, represent a remarkable diversity of populations reared under traditional and low-input farming systems (LIFS) over millennia. The majority of small ruminants in developing countries are found in low-input production systems, and the resilience of local communities in rural areas is often linked to the wellbeing of small ruminants. Despite the rich biodiversity of local sheep ecotypes, there are four main sheep breeds in the country, with the Barbarine (fat-tailed breed) and the Queue Fine de l'Ouest (thin-tailed breed) accounting for 61.6 and 35.4 percent, respectively. The Phoenicians introduced the Barbarine sheep from the steppes of Central Asia in the Carthaginian period, 3,000 years ago. The Queue Fine de l'Ouest is a thin-tailed meat breed heavily concentrated in the western and central semi-arid regions. The Noire de Thibar, a composite black-coated breed of mutton- and fine-wool-producing animals found in the northern sub-humid region because of its higher nutritional requirements and intolerance of the prevailing harsher conditions, has been on the verge of extinction. The D'Man breed, originating from Morocco, is mainly located in the southern oases of the extremely arid ecosystem. A genetic investigation of Tunisian sheep breeds using a genome-wide scan of approximately 50,000 SNPs was performed. Genetic analysis of the relationships between breeds highlighted the genetic differentiation of the Noire de Thibar breed from the other local breeds, reflecting past events of introgression from a European gene pool. The Queue Fine de l'Ouest breed showed genetic heterogeneity and was close to the Barbarine. The D'Man breed shared considerable gene flow with the thin-tailed Queue Fine de l'Ouest breed. Native small ruminant breeds can be efficiently productive if the essential ingredients and coherent breeding schemes are implemented and followed. Assessing the status of genetic variability of native sheep breeds could provide important clues for researchers and policymakers to devise better strategies for the conservation and management of genetic resources.
Keywords: sheep, farming systems, diversity, SNPs
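A minimal sketch of how between-breed differentiation is typically quantified from such SNP data, here using Hudson's Fst in scikit-allel; the genotypes are synthetic and the breed assignments are placeholders, since the study's estimator and data are not reproduced here:

```python
# Hedged sketch: between-breed differentiation from SNP genotypes using
# Hudson's Fst in scikit-allel. The tiny genotype array is synthetic; the
# study's 50K-SNP data and exact estimator are not reproduced here.
import allel
import numpy as np

rng = np.random.default_rng(0)
# 1000 SNPs x 20 diploid animals (10 per breed), random biallelic genotypes.
g = allel.GenotypeArray(rng.integers(0, 2, size=(1000, 20, 2)))

breed_a = list(range(10))        # e.g., Barbarine samples (assumed split)
breed_b = list(range(10, 20))    # e.g., Noire de Thibar samples

ac_a = g.count_alleles(subpop=breed_a)
ac_b = g.count_alleles(subpop=breed_b)
num, den = allel.hudson_fst(ac_a, ac_b)   # per-variant components
print(f"genome-wide Hudson Fst ~ {num.sum() / den.sum():.3f}")
```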
Procedia PDF Downloads 147
416 Totally Implantable Venous Access Device for Long Term Parenteral Nutrition in a Patient with High Output Enterocutaneous Fistula Due to Advanced Malignancy
Authors: Puneet Goyal, Aarti Agarwal
Abstract:
Background and Objective: Nutritional support is an integral part of the palliative care of patients with advanced non-resectable abdominal malignancy, though it is a frequently neglected aspect. Non-healing high-output enterocutaneous fistulas sometimes require long-term parenteral nutrition to counter catabolism and replace nutrients. We present a case of inoperable pancreatic malignancy with a high-output enterocutaneous fistula, in which parenteral nutritional support was provided through a Totally Implantable Venous Access Device (TIVAD). Method and Results: A 55-year-old man diagnosed with carcinoma of the pancreas had developed a high-output enterocutaneous fistula. His tumor was found to be inoperable, and he was on total parenteral nutrition through a routine central line. This line was difficult to maintain, as he required long-term TPN, so he was planned for implantation of a Totally Implantable Venous Access Device (TIVAD). An 8 Fr single-lumen catheter with a Groshong non-return valve (Bard Access Systems, Inc., USA) was inserted through the right internal jugular vein under fluoroscopic guidance. The catheter was tunneled subcutaneously and brought towards an infraclavicular pocket, cut to the appropriate length, connected to the port, and locked. The port was sutured to the floor of the pocket. Free flow of blood was aspirated, and the line was flushed with heparinized saline. No kink was observed along the entire length of the catheter under fluoroscopy. The skin over the infraclavicular pocket was sutured. Long-term catheter care and the associated risks were explained to the patient and relatives. The patient continued to receive total parenteral nutrition as well as other supportive therapy through the TIVAD for the next 6 weeks, until his demise. Conclusion: TIVADs are the standard of care for long-term venous access in cancer patients requiring chemotherapy. In this case, we extended their use to providing parenteral nutrition and other supportive therapy. TIVADs can be implanted in advanced cancer patients to provide a venous access solution for various palliative treatments and medications. This will help improve quality of life and satisfaction among terminally ill cancer patients.
Keywords: parenteral nutrition, totally implantable venous access device, long term venous access, interventions in anesthesiology
Procedia PDF Downloads 247
415 O-Functionalized CNT Mediated CO Hydro-Deoxygenation and Chain Growth
Authors: K. Mondal, S. Talapatra, M. Terrones, S. Pokhrel, C. Frizzel, B. Sumpter, V. Meunier, A. L. Elias
Abstract:
Worldwide energy independence is reliant on the ability to leverage locally available resources for fuel production. Recently, syngas produced through the gasification of carbonaceous materials has provided a gateway to a host of processes for the production of various chemicals, including transportation fuels. The basis of the production of gasoline- and diesel-like fuels is the Fischer-Tropsch Synthesis (FTS) process: a catalyzed chemical reaction that converts a mixture of carbon monoxide (CO) and hydrogen (H2) into long-chain hydrocarbons. Until now, it has been argued that only transition metal catalysts (usually Co or Fe) are active toward CO hydrogenation and subsequent chain growth in the presence of hydrogen. In this paper, we demonstrate that carbon nanotube (CNT) surfaces are also capable of hydro-deoxygenating CO and producing long-chain hydrocarbons similar to those obtained through FTS, but with orders-of-magnitude higher conversion efficiencies than the present state-of-the-art FTS catalysts. We have used advanced experimental tools such as XPS and microscopy techniques to characterize the CNTs and identify C-O functional groups as the active sites for the enhanced catalytic activity. Furthermore, we have conducted quantum Density Functional Theory (DFT) calculations to confirm that C-O groups (inherent on CNT surfaces) could indeed be catalytically active towards the reduction of CO with H2 and capable of sustaining chain growth. The DFT calculations show that the kinetically and thermodynamically feasible routes for CO insertion and hydro-deoxygenation are different from those on transition metal catalysts. Experiments in a continuous-flow tubular reactor with various nearly metal-free CNTs were carried out, and the products were analyzed. CNTs functionalized by various methods were evaluated under different conditions. Reactor tests revealed that hydrogen pre-treatment reduced the activity of the catalysts to negligible levels. Without the pretreatment, the activity for CO conversion was found to be 7 µmol CO/g CNT/s. The O-functionalized samples showed activities greater than 85 µmol CO/g CNT/s, with nearly 100% conversion. Analyses show that CO hydro-deoxygenation occurred at the C-O/O-H functional groups. It was found that while the products were similar to FT products, differences in selectivities were observed, which, in turn, were the result of a different catalytic mechanism. These findings open a new paradigm for CNT-based hydrogenation catalysts and constitute a defining point for obtaining clean, earth-abundant, alternative fuels through the use of efficient and renewable catalysts.
Keywords: CNT, CO hydrodeoxygenation, DFT, liquid fuels, XPS, XTL
Procedia PDF Downloads 347
414 Coupling of Microfluidic Droplet Systems with ESI-MS Detection for Reaction Optimization
Authors: Julia R. Beulig, Stefan Ohla, Detlev Belder
Abstract:
In contrast to off-line analytical methods, lab-on-a-chip technology delivers direct information about the observed reaction. Microfluidic devices therefore make an important scientific contribution, e.g., in the field of synthetic chemistry, where the rapid generation of analytical data can be applied to the optimization of chemical reactions. These devices enable fast changes of reaction conditions as well as a resource-saving mode of operation. In the presented work, we focus on the investigation of multiphase regimes, more specifically on biphasic microfluidic droplet systems. Here, every single droplet is a reaction container with customized conditions. The biggest challenge is the rapid qualitative and quantitative readout of information, as most detection techniques for droplet systems are non-specific, time-consuming, or too slow. An exception is electrospray ionization mass spectrometry (ESI-MS). The combination of a reaction screening platform with a rapid and specific detection method is an important step in droplet-based microfluidics. In this work, we present a novel approach for synthesis optimization on the nanoliter scale with direct ESI-MS detection. We show the development of a droplet-based microfluidic device that enables the modification of different parameters while simultaneously monitoring the effect on the reaction within a single run. A polydimethylsiloxane (PDMS) microfluidic chip with different functionalities is fabricated by common soft- and photolithographic techniques. As the interface for MS detection, we use a steel capillary for ESI and improve the spray stability with a Teflon siphon tubing inserted underneath the steel capillary. By optimizing the flow rates, it is possible to screen the parameters of various reactions; this is exemplarily shown for a domino Knoevenagel hetero-Diels-Alder reaction. Different starting materials, catalyst concentrations, and solvent compositions are investigated. Due to the high repetition rate of droplet production, each set of reaction conditions is examined hundreds of times. As a result of the investigation, we obtain suitable reagents, the ideal water-methanol ratio of the solvent, and the most effective catalyst concentration. The developed system can help determine the optimal parameters of a reaction within a short time. With this novel tool, we make an important step in the field of combining droplet-based microfluidics with organic reaction screening.
Keywords: droplet, mass spectrometry, microfluidics, organic reaction, screening
Procedia PDF Downloads 301
413 A Classical Caesarean Section with Peripartum Hysterectomy at 27+3 Weeks Gestation for Placenta Accreta
Authors: Huda Abdelrhman Osman Ahmed, Paul Feyi Waboso
Abstract:
Introduction: Placenta accreta spectrum (PAS) disorders present a significant challenge in obstetric management due to the high risk of hemorrhage and potential complications at delivery. This case describes a pregnancy at 27+3 weeks gestation with placenta accreta managed by classical caesarean section and peripartum hysterectomy. Case Description: A gravida 4, para 3 patient presented at 27+3 weeks gestation with painless, unprovoked vaginal bleeding and an estimated blood loss (EBL) of 300 mL. At the 20+5 week anomaly scan, an anterior placenta previa was identified, covering the os and containing lacunae, with signs of myometrial thinning. At a 24+1 week scan conducted at a tertiary center, further imaging indicated placenta increta with invasion into the myometrium and potential areas of placenta percreta. The patient's past obstetric history included three previous caesarean sections, with no significant medical or surgical history. Social history revealed heavy smoking but no alcohol use. No drug allergies were reported. Given the risks associated with PAS, a management plan was formulated, including an MRI at a later stage and caesarean delivery with a possible hysterectomy between 34 and 36 weeks. However, at 27+3 weeks, the patient experienced another episode of vaginal bleeding (EBL 500 mL), necessitating immediate intervention. Management: As the patient was unstable, she was not transferred to the tertiary center. Full informed consent was obtained. MDT planning included group-and-crossmatch of 4 units, uterotonics, tranexamic acid, blood products, cryoprecipitate, cell salvage, two obstetric consultants and an anesthetic consultant, blood bank and hematologist awareness, and HDU bed and ITU availability. A classical caesarean section was performed, with the urologist inserting JJ ureteric stents, followed by a total abdominal hysterectomy with conservation of the ovaries. Four units of RBC and one unit of FFP were transfused; the total blood loss was 2.3 L. Outcome: The procedure successfully achieved hemostasis, and the neonate was delivered and subsequently transferred to a neonatal intensive care unit for management. The patient's postoperative course was monitored closely, with no immediate complications. Discussion: This case highlights the complexity and urgency of managing placenta accreta spectrum disorders, particularly with the added challenges posed by a remote location and limited tertiary support. The need for rapid decision-making and interdisciplinary coordination is emphasized in such high-risk obstetric cases. The case also underscores the potential for surgical intervention and the importance of family involvement in emergent care decisions. Conclusion: Placenta accreta spectrum disorders demand meticulous planning and timely intervention. This case contributes to understanding PAS management at earlier gestational ages and provides insights into the challenges of access to tertiary care, especially in urgent situations.
Keywords: accreta, hysterectomy, MDT, prematurity
Procedia PDF Downloads 10
412 MEIOSIS: Museum Specimens Shed Light in Biodiversity Shrinkage
Authors: Zografou Konstantina, Anagnostellis Konstantinos, Brokaki Marina, Kaltsouni Eleftheria, Dimaki Maria, Kati Vassiliki
Abstract:
Body size is crucial in ecology, influencing everything from individual reproductive success to the dynamics of communities and ecosystems. Understanding how temperature affects variation in body size is vital for both theoretical and practical purposes, as changes in size can modify trophic interactions by altering predator-prey size ratios and changing the distribution and transfer of biomass, which ultimately impacts food web stability and ecosystem functioning. Notably, a decrease in body size is frequently mentioned as the third "universal" response to climate warming, alongside shifts in distribution and changes in phenology. This trend is backed by ecological theories like the temperature-size rule (TSR) and Bergmann's rule and has been observed in numerous species, indicating that many species are likely to shrink in size as temperatures rise. However, the thermal responses related to body size are still contradictory, and further exploration is needed. To tackle this challenge, we developed the MEIOSIS project, aimed at providing valuable insights into the relationships between species' body size, species' traits, environmental factors, and their response to climate change. We combined a digitized collection of butterflies from the Swiss Federal Institute of Technology in Zürich with our newly digitized butterfly collection from the Goulandris Natural History Museum in Greece to analyze trends over time. For a total of 23,868 images, the length of the right forewing was measured using ImageJ software. Each forewing was measured from the point at which the wing meets the thorax to the apex of the wing. The forewing length of museum specimens has been shown to correlate strongly with wing surface area and has been utilized in prior studies as a proxy for overall body size. Temperature data corresponding to the years of collection were also incorporated into the datasets. A second dataset was generated when a custom computer vision tool was implemented for the automated morphological measurement of samples in the digitized Zürich collection. Using this second dataset, we corrected the manual ImageJ measurements, and a final dataset containing 31,922 samples was used for analysis. Setting time as a smoother variable, species identity as a random factor, and the length of the right forewing (a proxy for body size) as the response variable, we ran a global model over a maximum period of 110 years (1900-2010). We then investigated functional variability between different terrestrial biomes in a second model. Both models confirmed our initial hypothesis and showed a decreasing trend in body size over the years. We expect that this first output can serve as baseline data for the next challenge, i.e., to identify the ecological traits that influence species' temperature-size responses, enabling us to predict the direction and intensity of a species' reaction to rising temperatures more accurately.
Keywords: butterflies, shrinking body size, museum specimens, climate change
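A minimal sketch of the modelling setup described above — a smooth term for collection year and species as a grouping factor, with forewing length as the response. pygam's f() is a fixed factor term rather than a true random effect, so this only approximates the mixed model, and the data here are synthetic:

```python
# Hedged sketch: a GAM with a smoother on year and a species factor term,
# approximating the 'time as smoother, species as random factor' model.
# The data are synthetic; the study's dataset is not reproduced here.
import numpy as np
from pygam import LinearGAM, s, f

rng = np.random.default_rng(1)
n = 500
year = rng.integers(1900, 2011, size=n)
species = rng.integers(0, 20, size=n)              # 20 species codes
wing = (20 - 0.01 * (year - 1900)                  # shrinking trend
        + species * 0.3 + rng.normal(0, 1, n))     # species offsets + noise

X = np.column_stack([year, species])
gam = LinearGAM(s(0) + f(1)).fit(X, wing)          # smoother on year
gam.summary()
```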
Procedia PDF Downloads 10
411 Effect of Spermidine on Physicochemical Properties of Protein Based Films
Authors: Mohammed Sabbah, Prospero Di Pierro, Raffaele Porta
Abstract:
Protein-based edible films and coatings have attracted increasing interest in recent years, since they might be used to protect pharmaceuticals or improve the shelf life of different food products. Among them, several plant proteins represent an abundant, inexpensive, and renewable raw source. These natural biopolymers are used as film-forming agents, being able to form intermolecular linkages through various interactions. However, without the addition of a plasticizing agent, many biomaterials are brittle and, consequently, very difficult to manipulate. Plasticizers are generally small and non-volatile organic additives used to increase film extensibility and reduce its crystallinity, brittleness, and water vapor permeability. Plasticizers normally act by decreasing the intermolecular forces along the polymer chains, thus reducing the relative number of polymer-polymer contacts, producing a decrease in cohesion and tensile strength, and thereby increasing film flexibility, allowing deformation without rupture. The most commonly studied plasticizers are polyols, like glycerol (GLY), and some mono- or oligosaccharides. In particular, GLY not only increases film extensibility but also migrates inside the film network, often causing the loss of desirable mechanical properties of the material. Therefore, replacing GLY with a different plasticizer might help improve film characteristics, allowing potential industrial applications. To improve film properties, it seemed of interest to test as plasticizers some small cationic molecules like polyamines (PAs). Putrescine, spermidine (SPD), and spermine are PAs widely distributed in nature and of particular interest for their biological activities, which may have some beneficial health effects. Since PAs contain amino instead of hydroxyl functional groups, they are able to trigger ionic interactions with negatively charged proteins. Bitter vetch (Vicia ervilia; BV) is an ancient grain legume crop that originated in the Mediterranean region and can be found today in many countries around the world. This annual Vicia species shows several favorable features, its seeds being a cheap and abundant protein source. The main objective of this study was to investigate the effect of different concentrations of SPD, alone or with GLY, on the mechanical and permeability properties of films prepared with native or heat-denatured BV proteins. Therefore, a BV seed protein concentrate (BVPC), containing about 77% protein, was used to prepare film-forming solutions (FFSs), whereas GLY and SPD were added as film plasticizers, either singly or in combination, at various concentrations. Since a primary plasticizer is generally defined as a molecule that, when added to a material, makes it softer, more flexible, and easier to process, our findings lead us to consider SPD a possible primary plasticizer of protein-based films. In fact, the addition of millimolar concentrations of SPD to BVPC FFSs allowed us to obtain handleable biomaterials with improved properties. Moreover, SPD can also be considered a secondary plasticizer, namely an 'extender', because of its ability to enhance the plasticizing performance of GLY. In conclusion, our studies indicate that innovative edible protein-based films and coatings can be obtained by using PAs as new plasticizers.
Keywords: edible films, glycerol, plasticizers, polyamines, spermidine
Procedia PDF Downloads 197
410 Optical Assessment of Marginal Sealing Performance around Restorations Using Swept-Source Optical Coherence Tomography
Authors: Rima Zakzouk, Yasushi Shimada, Yasunori Sumi, Junji Tagami
Abstract:
Background and purpose: Resin composite has become the main material for the restoration of caries in recent years due to its aesthetic characteristics, especially with the development of adhesive techniques. The quality of adhesion to tooth structures depends on an exchange process between inorganic tooth material and synthetic resin and on micromechanical retention promoted by resin infiltration into partially demineralized dentin. Optical coherence tomography (OCT) is a noninvasive diagnostic method for obtaining high-resolution cross-sectional images of biological tissue at the micron scale. The aim of this study was to evaluate gap formation at the adhesive/tooth interface of a two-step self-etch adhesive applied with or without phosphoric acid pre-etching in different regions of teeth, using SS-OCT. Materials and methods: Round tapered cavities (2×2 mm) were prepared in the cervical part of bovine incisors and divided into two groups (n=10): in the first group (SE), the self-etch adhesive (Clearfil SE Bond) was applied; in the second group (PA), the cavities were treated with phosphoric acid etching before applying the self-etch adhesive. Subsequently, both groups were restored with Estelite Flow Quick flowable composite resin and observed under OCT. Following 5,000 thermal cycles, the same section was imaged again for each cavity using OCT at a 1310-nm wavelength. Scanning was repeated after two months to monitor gap progression. The gap length was then measured using image analysis software, and statistical analysis between the two groups was performed using SPSS software. Afterwards, the cavities were sectioned and observed under a confocal laser scanning microscope (CLSM) to confirm the OCT results. Results: The gaps formed at the bottom of the cavity were longer than those formed at the margin and the dento-enamel junction (DEJ) in both groups. Moreover, the pre-etching treatment damaged the DEJ regions, creating longer gaps. After two months, the results showed significant gap progression at the bottom regions in both groups. In conclusion, phosphoric acid etching did not reduce gap length in most regions of the cavity. Significance: The bottom region of the cavity was more prone to gap formation than the margin and DEJ regions, and the DEJ was damaged by the phosphoric acid treatment.
Keywords: optical coherence tomography, self-etch adhesives, bottom, dento enamel junction
Procedia PDF Downloads 227
409 Analysis of Fish Preservation Methods for Traditional Fishermen Boat
Authors: Kusno Kamil, Andi Asni, Sungkono
Abstract:
According to a report of the Food and Agriculture Organization (FAO), post-harvest fish losses in Indonesia reach 30 percent; out of 170 trillion rupiahs of marine fisheries reserves, the potential loss thus reaches 51 trillion rupiahs (end-of-2016 data). This condition is caused by traditional fish catches being vulnerable to damage due to disruption of the preservation cold chain. Physical and chemical changes in fish flesh proceed rapidly, especially when the fish are exposed to scorching heat in the middle of the sea, and are exacerbated by low awareness of catch hygiene; unclean catches containing blood are often handled without special attention and mixed with freshly caught fish, thereby increasing the potential for faster spoilage. This background motivates research on preservation methods for the traditional fishermen's catch, aiming to find the best and most affordable method, and/or combination of methods, to help fishermen extend their fishing duration without worrying that the catch will be damaged and lose economic value by the time they return to shore to sell it. This goal is pursued through experimental treatments of fresh catches in containers with the addition of antibacterial copper, a liquid smoke solution, and the use of vacuum containers. Three further treatments combined these variables with an electrically powered cooler (temperature 0-4 °C). As controls, untreated fresh fish (placed in the open air and in a refrigerator) were also prepared for comparison over 1, 3, and 6 days. To test the freshness of the fish under each treatment, physical observations were used, complemented by tests for bacterial content in a trusted laboratory. The copper (Cu) content of the fish meat (suspected of having a negative impact on consumers) was also examined on the 6th day of experimentation. Physical observations of the test specimens (organoleptic method) showed that preservation assisted by coolers remained better for all treatment variables. Without cooling, the best preservation effectiveness was achieved, in order, by the addition of copper plates, the use of vacuum containers, and then liquid smoke immersion. In the case of liquid smoke in particular, soaking for 6 days of preservation makes the fish meat soft and easily crumbled, even though it does not develop a bad odor. The visual observations were then complemented by the results of testing the growth (or retardation) of putrefactive bacteria in each treatment over similar observation periods. Laboratory measurements showed that the lowest putrefactive bacteria counts were achieved by the treatment combining a cooler with liquid smoke (sample A+), followed by the cooler only (D+), the copper layer inside a cooler (B+), and the vacuum container inside a cooler (C+), respectively. The other treatments, in open air, produced a hundred times more putrefactive bacteria. In addition, the copper-layer treatment contaminated the preserved fish with copper at more than a thousand times the initial amount, from 0.69 to 1241.68 µg/g.
Keywords: fish, preservation, traditional, fishermen, boat
Procedia PDF Downloads 69
408 Scalable UI Test Automation for Large-scale Web Applications
Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani
Abstract:
This research concerns optimizing UI test automation for large-scale web applications. The test target is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for the web application. The quality assurance team must execute many manual user interface test cases during development to confirm there are no regression bugs. The team automated 346 test cases, and the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, so the quality assurance automation team modernized the test automation framework to optimize execution time. The web UI automation test environment is based on Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient workflow when introducing scalability into a traditional test automation environment; a scripting language was therefore adopted in order to introduce scalability efficiently. The scalability mechanism is implemented mainly with AWS serverless technology, specifically the Elastic Container Service. Scalability here means the ability to automatically provision the computers that run the automated tests and to increase or decrease their number. This mechanism lets test cases run in parallel, so test execution time decreases dramatically. Introducing scalable test automation does more than reduce execution time: because test cases can be executed at the same time, challenging bugs such as race conditions may also be detected. If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing; in practice, however, API and unit testing cannot cover 100% of functional testing in web applications, since they do not reach front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed both the optimization of test case execution time and the detection of a challenging bug. The study first describes the detailed architecture of the scalable test automation environment, then reports the actual reduction in execution time and an example of a challenging issue that was detected. Keywords: aws, elastic container service, scalability, serverless, ui automation test
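The abstract stops short of implementation detail; below is a minimal, hypothetical sketch of the fan-out it describes, dispatching shards of a Selenium suite as parallel ECS Fargate tasks with boto3. The cluster, task-definition, container, subnet, and environment-variable names are our illustrative assumptions, not HHAexchange's actual configuration.

```python
# Sketch of the fan-out idea: each ECS task runs one shard of the UI test
# suite in parallel. All resource names below are illustrative assumptions.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

def run_shard(shard_index: int, total_shards: int) -> str:
    """Launch one Fargate task that executes one shard of the UI test suite."""
    response = ecs.run_task(
        cluster="ui-test-cluster",                # assumed cluster name
        taskDefinition="selenium-ui-tests:1",     # assumed task definition
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
                "assignPublicIp": "ENABLED",
            }
        },
        overrides={
            "containerOverrides": [{
                "name": "test-runner",            # assumed container name
                "environment": [
                    {"name": "SHARD_INDEX", "value": str(shard_index)},
                    {"name": "TOTAL_SHARDS", "value": str(total_shards)},
                ],
            }]
        },
    )
    return response["tasks"][0]["taskArn"]

# Fan out: e.g., 346 test cases split across 20 parallel containers, each
# container selecting its subset of tests from SHARD_INDEX / TOTAL_SHARDS.
task_arns = [run_shard(i, 20) for i in range(20)]
```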
Procedia PDF Downloads 106
407 Determining the Thermal Performance and Comfort Indices of a Naturally Ventilated Room with Reduced Density Reinforced Concrete Wall Construction over Conventional M-25 Grade Concrete
Authors: P. Crosby, Shiva Krishna Pavuluri, S. Rajkumar
Abstract:
Purpose: Occupied built-up space can be broadly classified as air-conditioned or naturally ventilated. Regardless of the building type, the objective of all occupied built-up space is to provide a thermally acceptable environment for human occupancy. In this respect, air-conditioned spaces allow a greater degree of flexibility to control and modulate comfort parameters during the operation phase. In naturally ventilated spaces, however, a number of design features favoring indoor thermal comfort must be conceptualized from the design phase onward. One such primary design feature to be prioritized is the selection of the building envelope material, as it governs the flow of energy from the outside environment into the occupied space. Research methodology: In India and many countries across the globe, the standard material used for the building envelope is reinforced concrete (i.e., M-25 grade concrete). Comfort inside RC built environments in warm and humid climates (i.e., mid-day temperatures of 30-35˚C, diurnal variation of 5-8˚C, and RH of 70-90%) is unsatisfactory, to say the least. This study focuses on reviewing the impact of the mix design of conventional M-25 grade concrete on indoor comfort. In the proposed mix design, air entrainment is introduced to reduce the density of M-25 grade concrete to the range of 2000 to 2100 kg/m³. Thermal performance parameters and indoor comfort indices are analyzed for the proposed mix and compared against the conventional M-25 grade. Diverse methodologies govern indoor comfort calculation; in this study, three distinct approaches are adopted: a) the Indian Adaptive Thermal Comfort model, b) the Tropical Summer Index (TSI), and c) a criterion of air temperature below 33˚C and RH below 70%. The data required for the thermal comfort study were acquired by field measurement (for the new mix design) and by simulation using DesignBuilder (for the conventional concrete grade). Findings: The analysis shows that the Tropical Summer Index applies a higher degree of stringency in determining the occupant comfort band, while also providing leverage in the thermally tolerable band over and above the other methodologies in the context of this study. Another important finding is that the new mix design ensures a 10% reduction in indoor air temperature (IAT) relative to the outdoor dry bulb temperature (ODBT) during the day, which translates to a significant temperature difference of 6˚C between IAT and ODBT. Keywords: Indian adaptive thermal comfort, indoor air temperature, thermal comfort, tropical summer index
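Criterion (c) above is simple enough to state directly in code. A minimal sketch, with illustrative input values rather than the authors' measurements:

```python
# Comfort criterion (c) from the study: IAT < 33 degC and RH < 70%,
# plus the IAT reduction relative to ODBT that the findings report.
def is_comfortable(iat_c: float, rh_pct: float) -> bool:
    """Criterion (c): indoor air temperature below 33 degC and RH below 70%."""
    return iat_c < 33.0 and rh_pct < 70.0

def iat_reduction_pct(iat_c: float, odbt_c: float) -> float:
    """Percentage reduction of indoor air temperature relative to ODBT."""
    return 100.0 * (odbt_c - iat_c) / odbt_c

print(is_comfortable(iat_c=30.6, rh_pct=65.0))     # True
print(iat_reduction_pct(iat_c=30.6, odbt_c=34.0))  # ~10%, as reported
```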
Procedia PDF Downloads 320
406 Automated Building Internal Layout Design Incorporating Post-Earthquake Evacuation Considerations
Authors: Sajjad Hassanpour, Vicente A. González, Yang Zou, Jiamou Liu
Abstract:
Earthquakes pose a significant threat to both structural and non-structural elements in buildings, putting human lives at risk. Effective post-earthquake evacuation is critical for ensuring the safety of building occupants. However, current design practices often neglect the integration of post-earthquake evacuation considerations into the early-stage architectural design process. To address this gap, this paper presents a novel automated internal architectural layout generation tool that optimizes post-earthquake evacuation performance. The tool takes an initial plain floor plan as input, along with specific requirements from the user/architect, such as minimum room dimensions, corridor width, and exit lengths. Based on these inputs, the tool first randomly generates different architectural layouts. Second, human post-earthquake evacuation behaviour is thoroughly assessed for each generated layout using the advanced Agent-Based Building Earthquake Evacuation Simulation (AB2E2S) model. The AB2E2S prototype is a post-earthquake evacuation simulation tool that incorporates variables related to earthquake intensity, architectural layout, and human factors. It leverages a hierarchical agent-based simulation approach, incorporating reinforcement learning to mimic human behaviour during evacuation. The model evaluates different layout options and provides feedback on evacuation flow, evacuation time, and possible casualties due to non-structural earthquake damage. By integrating the AB2E2S model into the automated layout generation tool, architects and designers can obtain optimized architectural layouts that prioritize post-earthquake evacuation performance. Using the tool, architects and designers can explore various design alternatives, considering different minimum room requirements, corridor widths, and exit lengths. This approach ensures that evacuation considerations are embedded in the early stages of the design process. In conclusion, this research presents an innovative automated internal architectural layout generation tool that integrates post-earthquake evacuation simulation. By incorporating evacuation considerations into the early-stage design process, architects and designers can optimize building layouts for improved post-earthquake evacuation performance. This tool empowers professionals to create resilient designs that prioritize the safety of building occupants in the face of seismic events. Keywords: agent-based simulation, automation in design, architectural layout, post-earthquake evacuation behavior
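As a structural illustration of the generate-and-evaluate loop described above, the sketch below pairs a random layout generator with a stubbed evaluation. The layout representation, constraint handling, and scoring stub are our assumptions; the real tool replaces the stub with the AB2E2S agent-based simulation.

```python
# Schematic generate-and-evaluate loop. Everything here is a simplified
# stand-in for the tool's actual layout model and AB2E2S evaluation.
import random
from dataclasses import dataclass

@dataclass
class Constraints:
    min_room_w: float      # minimum room width (m)
    min_room_h: float      # minimum room depth (m)
    corridor_width: float  # required corridor width (m)
    exit_length: float     # required exit length (m)

def generate_layout(floor_w, floor_h, n_rooms, c: Constraints):
    """Randomly place rooms on the floor plate, respecting minimum sizes."""
    rooms = []
    for _ in range(n_rooms):
        w = random.uniform(c.min_room_w, floor_w / 2)
        h = random.uniform(c.min_room_h, floor_h / 2)
        x = random.uniform(0, floor_w - w)
        y = random.uniform(0, floor_h - h)
        rooms.append((x, y, w, h))
    return rooms

def evaluate_evacuation(layout):
    """Placeholder for the AB2E2S simulation, which would return evacuation
    time, flow metrics, and casualty estimates for this layout."""
    return random.uniform(30, 300)  # stub: evacuation time in seconds

constraints = Constraints(3.0, 3.0, 1.5, 2.0)
candidates = [generate_layout(20, 15, 6, constraints) for _ in range(100)]
best = min(candidates, key=evaluate_evacuation)  # keep the best-performing layout
```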
Procedia PDF Downloads 104
405 Effects of the Exit from Budget Support on Good Governance: Findings from Four Sub-Saharan Countries
Authors: Magdalena Orth, Gunnar Gotz
Abstract:
Background: Domestic accountability, budget transparency and public financial management (PFM) are considered vital components of good governance in developing countries. The aid modality budget support (BS) promotes these governance functions in developing countries. BS engages in political decision-making and provides financial and technical support to the poverty reduction strategies of partner countries. Nevertheless, many donors have withdrawn their support from this modality due to cases of corruption, fraud or human rights violations. This exit from BS leaves a finance and governance vacuum in the affected countries. The evaluation team analyzed the consequences of terminating the use of this modality and found particularly negative effects on good governance outcomes. Methodology: The evaluation uses a qualitative (theory-based) approach consisting of a comparative case study design, complemented by a process-tracing approach. For the case studies, the team conducted over 100 semi-structured interviews in Malawi, Uganda, Rwanda and Zambia and used four country-specific, tailor-made budget analyses. In combination with a previous DEval evaluation synthesis on the effects of BS, the team was able to create a before-and-after comparison that allows causal effects to be identified. Main findings: In all four countries, domestic accountability and budget transparency declined where other forms of pressure did not replace BS's mutual accountability mechanisms. In Malawi, a fraud scandal created pressure from society and from donors, so that accountability improved. In the other countries, these pressure mechanisms were absent, and domestic accountability declined. BS enables donors to participate actively in the political processes of the partner country, as donors transfer funds into the partner country's treasury and conduct a high-level political dialogue. The results confirm that the exit from BS created a governance vacuum that, if not compensated through external or internal pressure, leads to a deterioration of good governance. For example, in the case of highly aid-dependent Malawi, the possibility of a relaunch of BS provided sufficient incentives to push for governance reforms. Overall, the results show that all three good governance areas are negatively affected by the exit from BS. This stands in contrast to the positive effects found before the exit. The team concludes that the relationship is causal, because the before-and-after comparison coherently shows that the presence of BS correlates with positive effects and its absence with negative effects. Conclusion: These findings strongly suggest that BS is an effective modality for promoting governance, and that its abolishment is likely to cause governance disruptions. Donors and partner governments should find ways to re-engage in closely coordinated policy-based aid modalities. In addition, a coordinated and carefully managed exit strategy should be in place before an exit from similar modalities is considered. In particular, a continued framework of mutual accountability and a high-level political dialogue should be maintained to preserve the pressure and oversight required to achieve good governance. Keywords: budget support, domestic accountability, public financial management and budget transparency, Sub-Sahara Africa
Procedia PDF Downloads 151
404 Characterization of Dota-Girentuximab Conjugates for Radioimmunotherapy
Authors: Tais Basaco, Stefanie Pektor, Josue A. Moreno, Matthias Miederer, Andreas Türler
Abstract:
Radiopharmaceuticals based on monoclonal antibodies (mAbs) coupled to radiometals via chemical linkers have become a potential tool in nuclear medicine because of their specificity and the large variability and availability of therapeutic radiometals. It is important to identify the conjugation sites and the number of chelators attached to the mAb in order to obtain radioimmunoconjugates with the required immunoreactivity and radiostability. The girentuximab antibody (G250) is a potential candidate for radioimmunotherapy of clear cell renal cell carcinomas (RCCs) because it is reactive with the CAIX antigen, a transmembrane glycoprotein overexpressed on the cell surface of most (>90%) RCCs. G250 was conjugated with the bifunctional chelating agent DOTA (1,4,7,10-tetraazacyclododecane-N,N',N'',N'''-tetraacetic acid) via a benzyl-thiocyanate group as a linker (p-SCN-Bn-DOTA). DOTA-G250 conjugates were analyzed by size exclusion chromatography (SE-HPLC) and by electrophoresis (SDS-PAGE). The potential site-specific conjugation was identified by liquid chromatography-tandem mass spectrometry (LC-MS/MS), and the number of linkers per molecule of mAb was calculated from the molecular weight (MW) measured by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). The average number obtained for the conjugates under non-reducing conditions was between 8 and 10 molecules of DOTA per molecule of mAb. Under reducing conditions, the average number was between 1-2 and 3-4 molecules of DOTA per molecule of mAb on the light chain (LC) and heavy chain (HC), respectively. Potential DOTA modification sites were identified at lysine residues. The biological activity of the conjugates was evaluated by flow cytometry (FACS) using CAIX-negative (SK-RC-18) and CAIX-positive (SK-RC-52) cells. The DOTA-G250 conjugates were labelled with 177Lu with a radiochemical yield >95%, reaching specific activities of 12 MBq/µg. The in vitro stability of different types of radioconstructs was analyzed in human serum albumin (HSA). The radiostability of 177Lu-DOTA-G250 at high specific activity was increased by the addition of sodium ascorbate after labelling. The immunoreactivity was evaluated in vitro and in vivo. Binding to CAIX-positive cells (SK-RC-52) at different specific activities was higher for conjugates with lower DOTA content. Protein dose was optimized in mice with subcutaneously growing SK-RC-52 tumors using different amounts of 177Lu-DOTA-G250. Keywords: mass spectrometry, monoclonal antibody, radiopharmaceuticals, radioimmunotherapy, renal cancer
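The chelator-to-antibody ratio derived from MALDI-TOF MS follows from the mass shift between the conjugate and the unmodified antibody. A minimal sketch, using approximate literature masses as assumptions (not the authors' reported values):

```python
# Average DOTA-per-mAb count from the MALDI-TOF mass shift. The linker
# mass and the example antibody masses below are approximate, assumed values.
MW_P_SCN_BN_DOTA = 551.6  # g/mol, p-SCN-Bn-DOTA; thiourea coupling adds roughly the full reagent mass

def chelators_per_mab(mw_conjugate_da: float, mw_naked_mab_da: float,
                      added_mass_per_chelator: float = MW_P_SCN_BN_DOTA) -> float:
    """Average number of chelators per antibody from the measured mass shift."""
    return (mw_conjugate_da - mw_naked_mab_da) / added_mass_per_chelator

# Illustrative numbers only: a ~150 kDa IgG showing a ~5 kDa shift
print(round(chelators_per_mab(155_000.0, 150_000.0), 1))  # ~9 DOTA per mAb
```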
Procedia PDF Downloads 307
403 Phospholipid Cationic and Zwitterionic Compounds as Potential Non-Toxic Antifouling Agents: A Study of Biofilm Formation Assessed by Micro-titer Assays with Marine Bacteria and Eco-toxicological Effect on Marine Microalgae
Authors: D. Malouch, M. Berchel, C. Dreanno, S. Stachowski-Haberkorn, P-A. Jaffres
Abstract:
Biofouling is a complex natural phenomenon that involves biological, physical and chemical properties related to the environment, the submerged surface and the living organisms involved. Bio-colonization of artificial structures can cause various economic and environmental impacts. The increase in costs associated with the over-consumption of fuel by biocolonized vessels has been widely studied. Measurement drift in submerged sensors, obstructions in heat exchangers, and deterioration of offshore structures are major difficulties that industries are dealing with. Therefore, surfaces that inhibit biocolonization are required in different areas (water treatment, marine paints, etc.), and many efforts have been devoted to producing efficient and eco-compatible antifouling agents. The different steps of surface fouling are widely described in the literature. Studying the biofilm and its stages provides a better understanding of how to elaborate more efficient antifouling strategies. Several approaches are currently applied, such as the use of biocidal antifouling paints (mainly with copper derivatives) and super-hydrophobic coatings. While these two approaches are proving to be the most effective, they are not entirely satisfactory, especially in the context of changing legislation. Nowadays, the challenge is to prevent biofouling with non-biocidal compounds, offering a cost-effective solution with no toxic effects on marine organisms. Since the micro-fouling phase plays an important role in the regulation of the subsequent steps of biofilm formation, it is desirable to reduce or delay the biofouling of a given surface by inhibiting micro-fouling at its early stages. In our recent work, we reported that some amphiphilic compounds exhibited bacteriostatic or bactericidal properties at concentrations that did not affect mammalian eukaryotic cells. These remarkable properties invited us to assess this type of bio-inspired phospholipid for preventing the colonization of surfaces by marine bacteria. Of note, other studies have reported that amphiphilic compounds interact with bacteria, leading to a reduction in their development. An amphiphilic compound is a molecule consisting of a hydrophobic domain and a polar head (ionic or non-ionic). These compounds appear to have interesting antifouling properties: some ionic compounds have shown antimicrobial activity, and zwitterions can reduce the nonspecific adsorption of proteins. Herein, we investigate the potential of amphiphilic compounds as inhibitors of bacterial growth and marine biofilm formation. The aim of this study is to compare the efficacy of four synthetic phospholipids, featuring a cationic charge or a zwitterionic polar head group, in preventing micro-fouling by marine bacteria. The toxicity of these compounds was also studied in order to identify the most promising compounds that inhibit biofilm development while showing low cytotoxicity toward two links representative of coastal marine food webs: phytoplankton and oyster larvae. Keywords: amphiphilic phospholipids, biofilm, marine fouling, non-toxic assays
Procedia PDF Downloads 134
402 Piezotronic Effect on Electrical Characteristics of Zinc Oxide Varistors
Authors: Nadine Raidl, Benjamin Kaufmann, Michael Hofstätter, Peter Supancic
Abstract:
If polycrystalline ZnO is properly doped and sintered under very specific conditions, it shows unique electrical properties that are indispensable for today's electronics industries, where it is used as the number one overvoltage protection material. Below a critical voltage, the polycrystalline bulk exhibits high electrical resistance, but it suddenly becomes up to twelve orders of magnitude more conductive once this voltage limit is exceeded (the varistor effect). It is known that these peerless properties have their origin in the grain boundaries of the material. Electric charge accumulates at the boundaries, causing a depletion layer in their vicinity and forming potential barriers (so-called double Schottky barriers, or DSBs), which are responsible for the highly non-linear conductivity. Since ZnO is a piezoelectric material, mechanical stresses induce polarization charges that modify the DSB heights and, as a result, the global electrical characteristics (the piezotronic effect). In this work, a finite element method was used to simulate the stresses emerging on individual grains in the bulk. In addition, experimental efforts were made to test a coherent model that could explain this influence. Electron backscatter diffraction was used to identify grain orientations. With the help of wet chemical etching, grain polarization was determined. Micro lock-in infrared thermography (MLIRT) was applied to detect current paths through the material, and a micro four-point-probe system (M4PPS) was employed to investigate current-voltage characteristics between single grains. Bulk samples were tested under uniaxial pressure. It was found that the conductivity can increase by up to three orders of magnitude with increasing stress. Through in-situ MLIRT, it could be shown that this effect is caused by the activation of additional current paths in the material. Furthermore, compressive tests were performed on miniaturized samples with grain paths containing only one or two grain boundaries. These tests revealed both an increase in conductivity, as observed for the bulk, and a decreased conductivity. This phenomenon has been predicted theoretically and can be explained by piezotronically induced surface charges that affect the DSBs at the grain boundaries. Depending on grain orientation and stress direction, DSBs can be raised or lowered. The experiments also revealed that the conductivity within one single specimen can increase or decrease depending on the current direction. This novel finding indicates the existence of asymmetric double Schottky barriers, which was furthermore proved by complementary methods. MLIRT studies showed that the intensity of heat generation within individual current paths depends on the direction of the stimulating current. M4PPS was used to study the relationship between the I-V characteristics of single grain boundaries and grain orientation, and revealed asymmetric behavior for very specific orientation configurations. A new model for the double Schottky barrier, taking into account this natural asymmetry and explaining the experimental results, will be given. Keywords: asymmetric double Schottky barrier, piezotronic, varistor, zinc oxide
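For background, the conduction picture the abstract builds on can be written compactly. This is the textbook thermionic-emission relation over a double Schottky barrier with a piezoelectrically shifted barrier height, offered as a hedged illustration rather than the authors' new asymmetric-DSB model:

```latex
% Hedged background sketch: current density over a DSB of effective height
% \varphi_B^{eff}, shifted by piezoelectrically induced interface charge
% \sigma_{pz}; the sign depends on grain orientation and stress direction.
J = A^{*} T^{2} \exp\!\left(-\frac{q\,\varphi_B^{\mathrm{eff}}}{k_B T}\right),
\qquad
\varphi_B^{\mathrm{eff}} = \varphi_{B,0} \mp \Delta\varphi\!\left(\sigma_{\mathrm{pz}}\right)
```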
Procedia PDF Downloads 267
401 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps
Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo
Abstract:
With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser, running on the CPU, issues GL commands to the GPU for rendering the images to be displayed by the currently running Web app, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated with outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already limited by the retarded execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach poses two technical challenges: identification of the bottleneck resource, and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to determine which resource, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU. Thus, the Window Manager is on the critical path that determines the quality of the user experience and is executed purely by the CPU. The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%; consequently, the proposed scheme decreases the performance level of the CPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the GPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, while worsening FPS by only 1.07% on average. Keywords: interactive applications, power management, QoS, Web apps, WebGL
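Read as an algorithm, the scheme amounts to a small control loop. The sketch below follows the thresholds and smoothing stated in the abstract (90%, 60%, weighted average); the smoothing weight and the frequency-scaling hooks are placeholder assumptions, not the authors' kernel implementation.

```python
# Sketch of the bottleneck-aware governor described above. The 90%/60%
# thresholds and the weighted-average smoothing come from the abstract;
# ALPHA and the scaling hooks are assumptions.

ALPHA = 0.3  # smoothing weight (assumed value)

class BottleneckAwareGovernor:
    def __init__(self):
        self.wm_util_avg = 0.0  # smoothed Window Manager CPU utilization (%)

    def update(self, wm_util_sample: float):
        """Periodic step: smooth the sample, then adjust one frequency level."""
        self.wm_util_avg = ALPHA * wm_util_sample + (1 - ALPHA) * self.wm_util_avg
        if self.wm_util_avg > 90.0:
            # Per the abstract: CPU capability decides user experience here,
            # and the scheme lowers the CPU performance level by one step.
            self.step_down_cpu(steps=1)
        elif self.wm_util_avg < 60.0:
            # Per the abstract: the bottleneck usually lies in the GPU, so
            # lower the GPU frequency gradually until an appropriate level.
            self.step_down_gpu(steps=1)

    def step_down_cpu(self, steps: int):
        pass  # placeholder: write to cpufreq or a platform-specific driver

    def step_down_gpu(self, steps: int):
        pass  # placeholder: write to the GPU DVFS interface
```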
Procedia PDF Downloads 192
400 Informational Habits and Ideology as Predictors for Political Efficacy: A Survey Study of the Brazilian Political Context
Authors: Pedro Cardoso Alves, Ana Lucia Galinkin, José Carlos Ribeiro
Abstract:
Political participation can be a somewhat tricky subject to define, in no small part due to the constant changes in the concept, the fruit of efforts to include new forms of participatory behavior that go beyond traditional institutional channels. With the advent of the internet and mobile technologies, defining political participation has become an even more complicated endeavor, given the breadth of politicized behaviors expressed through these mediums, be it in the very organization of social movements, in the propagation of politicized texts, videos and images, or in the micropolitical behaviors expressed in daily interaction. In fact, the very frontiers that delimit physical and digital spaces have become ever more diluted by technological advancements, leading to a hybrid existence that is simultaneously physical and digital, no longer bound, as it once was, by the temporal limitations of classic communications. Moving away from the institutionalized actions of traditional political behavior, this study discusses an idea of constant and fluid participation, which occurs in our daily lives through conversations, posts, tweets and other digital forms of expression. The discussion focuses on the factors that precede more direct forms of political participation, interpreting the relation between informational habits, ideology, and political efficacy. Though some informational habits can themselves be considered political participation by some authors, a distinction is made here to establish a logical flow of behaviors leading to participation; that is, one must gather and process information before acting on it. To reach this objective, a quantitative survey is currently being applied in Brazilian social media, evaluating feelings of political efficacy, issue-based ideological stances on social and economic questions, and informational habits pertaining to the collection and fact-checking of information, as well as the diversity of sources and ideological positions present in the participant's political information network. The measure being used for informational habits relies strongly on a mix of information literacy and political sophistication concepts, bringing a more up-to-date understanding of information and knowledge production and processing in contemporary hybrid (physical-digital) environments. Though data is still being collected, preliminary analyses point towards a strong correlation between informational habits and political efficacy, while ideology shows a weaker influence over efficacy. Moreover, social ideology and economic ideology seem to be strongly correlated in the sample; such intermingling between social and economic ideals is generally considered a red flag for political polarization. Keywords: political efficacy, ideology, information literacy, cyberpolitics
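As an illustration of the kind of analysis the abstract describes, below is a hedged sketch of regressing efficacy on informational habits and the two ideology scales. The variable names and data file are hypothetical, not the study's actual instrument.

```python
# Hedged sketch: predicting political efficacy from informational habits
# and ideology scores. Column names and the CSV file are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical file with scale scores

model = smf.ols(
    "efficacy ~ info_habits + ideology_social + ideology_economic", data=df
).fit()
print(model.summary())

# The reported intermingling of the two ideology scales would show up here:
print(df[["ideology_social", "ideology_economic"]].corr())
```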
Procedia PDF Downloads 234
399 Embryonic Aneuploidy – Morphokinetic Behaviors as a Potential Diagnostic Biomarker
Authors: Banafsheh Nikmehr, Mohsen Bahrami, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Mallory Pitts, Tolga B. Mesen, Tamer M. Yalcinkaya
Abstract:
The number of people who receive in vitro fertilization (IVF) treatment has increased on a startling trajectory over the past two decades. Despite advances in this field, particularly the introduction of intracytoplasmic sperm injection (ICSI) and preimplantation genetic screening (PGS), IVF success rates remain low. A major factor contributing to IVF failure is embryonic aneuploidy (abnormal chromosome content), which often results in miscarriage and birth defects. Although PGS is often used as the standard diagnostic tool to identify aneuploid embryos, it is an invasive approach that could affect embryo development, and it remains inaccessible to many patients due to its high cost. As such, there is a clear need for a non-invasive, cost-effective approach to identify euploid embryos for single embryo transfer (SET). Reported differences between the morphokinetic behaviors of aneuploid and euploid embryos show promise for addressing this need. However, the current literature is inconclusive, and further research is urgently needed to translate current findings into clinical diagnostics. In this ongoing study, we found significant differences between the morphokinetic behaviors of euploid and aneuploid embryos, which provides important insights and reaffirms the promise of such behaviors for developing non-invasive methodologies. Methodology: A total of 242 embryos (euploid: 149, aneuploid: 93) from 74 patients who underwent IVF treatment at Carolinas Fertility Clinics in Winston-Salem, NC, were analyzed. All embryos were incubated in an EmbryoScope incubator. The patients were randomly selected from January 2019 to June 2021, with most patients having both euploid and aneuploid embryos. All embryos reached the blastocyst stage and had known PGS outcomes. The ploidy assessment was done by a third-party testing laboratory on day 5-7 embryo biopsies. The morphokinetic variables of each embryo were measured with the EmbryoViewer software (Unisense FertiliTech) on time-lapse images using 7 focal depths. We compared the time to pronuclei fading (tPNf), division to 2, 3, ..., 9 cells (t2, t3, ..., t9), start of embryo compaction (tSC), morula formation (tM), start of blastocyst formation (tSB), blastocyst formation (tB), and blastocyst expansion (tEB), as well as the intervals between them (e.g., c23 = t3 - t2). We used a mixed regression method for our statistical analyses to account for the correlation between multiple embryos per patient. Major findings: The average age of the patients was 35.04 yrs. The average patient age associated with euploid and aneuploid embryos was not different (P = 0.6454). We found a significant difference in c45 = t5 - t4 (P = 0.0298). Our results indicated that this interval on average lasts significantly longer for aneuploid embryos: c45(aneuploid) = 11.93 hr vs. c45(euploid) = 7.97 hr. In a separate analysis limited to embryos from the same patients (patients = 47, total embryos = 200, euploid = 112, aneuploid = 88), we obtained the same results (P = 0.0316). The statistical power for this analysis exceeded 87%. No other variable differed between the two groups. Conclusion: Our results demonstrate the importance of morphokinetic variables as potential biomarkers that could aid in non-invasively characterizing euploid and aneuploid embryos. We seek to study a larger population of embryos and incorporate embryo quality in future studies. Keywords: IVF, embryo, euploidy, aneuploidy, morphokinetic
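For illustration, the mixed regression described above (embryos nested within patients) could be expressed as follows; the column names and data file are hypothetical stand-ins for the study's actual annotation export.

```python
# Hedged sketch of the mixed regression: compare the c45 = t5 - t4 interval
# between euploid and aneuploid embryos with a random intercept per patient.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("embryo_morphokinetics.csv")  # hypothetical time-lapse export
df["c45"] = df["t5"] - df["t4"]                # hours from 4-cell to 5-cell

# Grouping by patient accounts for multiple embryos per patient.
model = smf.mixedlm("c45 ~ C(ploidy)", data=df, groups="patient_id").fit()
print(model.summary())  # the ploidy fixed effect tests the group difference
```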
Procedia PDF Downloads 88
398 Training Hearing Parents in SmiLE Therapy Supports the Maintenance and Generalisation of Deaf Children's Social Communication Skills
Authors: Martina Curtin, Rosalind Herman
Abstract:
Background: Deaf children can experience difficulties with understanding how social interaction works, particularly when communicating with unfamiliar hearing people, and often struggle to integrate into mainstream hearing environments. These negative experiences can lead to social isolation, depression and other mental health difficulties later in life. smiLE Therapy (Schamroth, 2015) is a video-based social communication intervention that aims to teach deaf children the skills to communicate confidently with unfamiliar hearing people. Although two previous studies have reported improvements in communication skills immediately post-intervention, maintenance of gains and generalisation of skills (i.e., the transfer of newly learnt skills to untrained situations) have not to date been demonstrated. Parental involvement has been shown to support deaf children's therapy outcomes. Therefore, this study added parent training to the therapy children received, to investigate the benefits for the generalisation of children's skills. Parents were also invited to give their perspective on the training they received. Aims: (1) To assess pupils' progress from pre- to post-intervention in trained and untrained tasks; (2) to investigate whether training parents improved (a) their understanding of their child's needs and (b) their skills in supporting their child appropriately in smiLE Therapy tasks; (3) to assess whether parent training had an impact on the pupils' ability to (a) maintain their skills in trained tasks post-therapy, and (b) generalise their skills to untrained, community tasks. Methods: This was a mixed-methods, repeated-measures study. 31 deaf pupils (aged between 7 and 14) received an hour of smiLE Therapy per week for 6 weeks. Communication skills were assessed pre-intervention, post-intervention and at 3 months post-intervention using the Communication Skills Checklist. Parents were then invited to attend two training sessions and asked to bring a video of their child communicating in a shop or café. These videos were used to assess whether, after parent training, the child was able to generalise their skills to a new situation. Finally, parents attended a focus group to discuss the effectiveness of the therapy, particularly its wider impact, i.e., greater child participation within the hearing community. Results: All children significantly improved their scores following smiLE Therapy and maintained these skills at a high level. Children generalised a high percentage of their newly learnt skills to an untrained situation. Parents reported an improved understanding of their child's needs, of their child's potential, and of how to support them in real-life situations. Parents observed that their children were more confident and independent when carrying out communication tasks with unfamiliar hearing people. Parents realised they needed to 'let go' and embrace their child's independence, and to provide more opportunities for them to participate in their community. Conclusions: This study adds to the evidence base on smiLE Therapy: it is an effective intervention that develops deaf children's ability to interact competently with unfamiliar hearing communication partners. It also provides preliminary evidence of the benefits of parent training in helping children to generalise their skills to other situations. These findings will be of value to therapists wishing to develop deaf children's communication skills beyond the therapy setting. Keywords: deaf children, generalisation, parent involvement, social communication
Procedia PDF Downloads 139