Search results for: quality parameters
294 Non-Invasive Evaluation of Patients After Percutaneous Coronary Revascularization. The Role of Cardiac Imaging
Authors: Abdou Elhendy
Abstract:
Numerous studies have shown the efficacy of percutaneous coronary intervention (PCI) and coronary stenting in improving left ventricular function and relieving exertional angina. Furthermore, PCI remains the main line of therapy in acute myocardial infarction. Improvements in procedural techniques and new devices have resulted in an increasing number of PCI procedures in patients with difficult and extensive lesions, multivessel disease, and total occlusions. Immediate and late outcomes may be compromised by acute thrombosis or the development of fibro-intimal hyperplasia. In addition, progression of coronary artery disease proximal or distal to the stent, as well as in non-stented arteries, is not uncommon. As a result, complications can occur, such as acute myocardial infarction, worsened heart failure, or recurrence of angina. In-stent restenosis can occur without symptoms or with atypical complaints, rendering the clinical diagnosis difficult. Routine invasive angiography is not appropriate as a follow-up tool because of its associated risk and cost and its limited functional assessment. Exercise and pharmacologic stress testing are increasingly used to evaluate myocardial function, perfusion, and the adequacy of revascularization. Information obtained by these techniques provides important clues regarding the presence and severity of compromise in myocardial blood flow. Stress echocardiography can be performed in conjunction with exercise or dobutamine infusion. Its diagnostic accuracy has been moderate, but the results provide excellent prognostic stratification. Adding myocardial contrast agents can improve image quality and allows assessment of both function and perfusion. Stress radionuclide myocardial perfusion imaging is an alternative for evaluating these patients. The extent and severity of wall motion and perfusion abnormalities observed during exercise or pharmacologic stress are predictors of survival and of the risk of cardiac events. 
According to current guidelines, stress echocardiography and radionuclide imaging are considered appropriately indicated in patients after PCI who have cardiac symptoms and in those who underwent incomplete revascularization. Stress testing is not recommended in asymptomatic patients, particularly early after revascularization. Coronary CT angiography is increasingly used and provides high sensitivity for the diagnosis of coronary artery stenosis. Average sensitivity and specificity for the diagnosis of in-stent stenosis in pooled data are 79% and 81%, respectively. Limitations include blooming artifacts and low feasibility in patients with small stents or thick struts. Anatomical and functional cardiac imaging modalities are cornerstones of the assessment of patients after PCI and provide salient diagnostic and prognostic information. Current imaging techniques can serve as gatekeepers for coronary angiography, thus limiting the risk of invasive procedures to those who are likely to benefit from subsequent revascularization. Determining which modality to apply requires careful identification of the merits and limitations of each technique as well as the unique characteristics of each individual patient.
Keywords: coronary artery disease, stress testing, cardiac imaging, restenosis
Procedia PDF Downloads 168
293 Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods to satellite imagery is a nascent but rapidly growing one. This is in part driven by the Sustainable Development Goals, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be estimated accurately and rapidly at different spatial scales and resolutions. The conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor is their implementation sufficiently swift to gain a timely insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially deep learning methods such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and have thus seen limited downstream application, as humans are generally apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a deep learning model using different resolutions of satellite imagery to estimate the welfare levels of Demographic and Health Survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. 
The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6 m per pixel at zoom level 18, while that for the machine learning model was sourced from the comparatively lower-resolution Sentinel-2 data at 10 m per pixel for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower than those attained by the machine learning model, 0.69 to 0.79. This superhuman performance by the model is even more significant given that it was trained on the relatively low 10 m resolution satellite data, while the human readers estimated welfare levels from the higher 0.6 m spatial resolution data, in which key markers of poverty and slums, such as roofing and road quality, are discernible. It is important to note, however, that the human readers did not receive any training before rating; had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall of limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship, eXplainable Artificial Intelligence, through a collaborative rather than a comparative framework.
Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
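The comparison above rests on rank correlation between predicted and ground-truth wealth rankings. A minimal sketch of Spearman's rank correlation on invented ratings for five hypothetical clusters (all values are illustrative, not the study's data):

```python
def rank(values):
    # Rank each value (1 = smallest); assumes no ties for simplicity.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    # Spearman's rho via the classic formula on rank differences.
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Invented wealth ratings for five survey clusters (ground truth vs. estimates).
truth = [1, 2, 3, 4, 5]
model = [1, 3, 2, 4, 5]   # mostly agrees with the ground truth
human = [3, 1, 5, 2, 4]   # weaker agreement
print(spearman(truth, model))  # 0.9
print(spearman(truth, human))  # 0.3
```

Real analyses would handle tied ratings (e.g. several clusters in the same quintile) with averaged ranks, which this sketch omits.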
Procedia PDF Downloads 106
292 Modeling the Impact of Time Pressure on Activity-Travel Rescheduling Heuristics
Authors: Jingsi Li, Neil S. Ferguson
Abstract:
Time pressure can influence productivity, the quality of decision making, and the efficiency of problem solving. This insight stems mostly from cognitive research and the psychological literature; discussion in adjacent transport fields, however, has been notably scarce. It is conceivable that in many activity-travel contexts, time pressure is a potentially important factor, since an excessive amount of decision time may incur the risk of late arrival at the next activity. Activity-travel rescheduling behavior is commonly explained by the costs and benefits of factors such as activity engagements, personal intentions, and social requirements. This paper hypothesizes that an additional factor, perceived time pressure, could affect travelers' rescheduling behavior, thus having an impact on travel demand management. Time pressure may arise in different ways and is assumed here to be essentially incurred by travelers planning their schedules without anticipating unforeseen elements, e.g., transport disruption. In addition to a linear-additive utility-maximization model, less computationally demanding non-compensatory heuristic models are considered as an alternative to simulate travelers' responses. The paper will contribute to travel behavior modeling research by investigating the following questions: How can time pressure be measured properly in an activity-travel day plan context? How do travelers reschedule their plans to cope with time pressure? How does the importance of the activity affect travelers' rescheduling behavior? What behavioral model can be identified to describe the process of making activity-travel rescheduling decisions? How do these identified coping strategies affect the transport network? In this paper, a Mixed Heuristic Model (MHM) is employed to identify the presence of different choice heuristics through a latent class approach. 
Data on travelers' activity-travel rescheduling behavior are collected via a web-based interactive survey in which a fictitious scenario is created comprising multiple uncertain events affecting activities or travel. The experiments are conducted in order to gain a realistic picture of activity-travel rescheduling under time pressure. The identified behavioral models are then integrated into a multi-agent transport simulation model to investigate the effect of the rescheduling strategies on the transport network. The results show that an increased proportion of travelers use simpler, non-compensatory choice strategies instead of compensatory methods to cope with time pressure. Specifically, satisficing, one of the heuristic decision-making strategies, is commonly adopted, since travelers tend to abandon the less important activities and keep the important ones. Furthermore, the importance of the activity is found to increase the weight of negative information when making trip-related decisions, especially route choices. When the identified non-compensatory decision-making heuristic models are incorporated into the agent-based transport model, the simulation results imply that neglecting the effect of perceived time pressure may result in an inaccurate forecast of choice probabilities and overestimate responsiveness to policy changes.
Keywords: activity-travel rescheduling, decision making under uncertainty, mixed heuristic model, perceived time pressure, travel demand management
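The contrast between compensatory utility maximization and a satisficing heuristic can be sketched in a few lines. All option names, attributes, weights, and thresholds below are invented for illustration and are not the paper's specification:

```python
# Hypothetical rescheduling options under time pressure, each scored on
# travel time saved and the importance of the activities kept (0..1 scales).
options = {
    "keep_all_activities": {"time_saved": 0.0, "importance_kept": 1.0},
    "drop_minor_activity": {"time_saved": 0.6, "importance_kept": 0.9},
    "drop_major_activity": {"time_saved": 0.9, "importance_kept": 0.3},
}

def compensatory(opts, weights):
    # Utility maximization: weighted sum over all attributes; best score wins,
    # so a strength on one attribute can compensate a weakness on another.
    def utility(attrs):
        return sum(weights[k] * v for k, v in attrs.items())
    return max(opts, key=lambda name: utility(opts[name]))

def satisficing(opts, thresholds, order):
    # Satisficing: accept the first option (in consideration order) that
    # meets every aspiration threshold; no trading off between attributes.
    for name in order:
        if all(opts[name][k] >= t for k, t in thresholds.items()):
            return name
    return order[-1]  # fall back to the last option considered

weights = {"time_saved": 0.8, "importance_kept": 0.2}   # time pressure dominates
thresholds = {"time_saved": 0.5, "importance_kept": 0.8}
order = ["keep_all_activities", "drop_minor_activity", "drop_major_activity"]

print(compensatory(options, weights))           # drop_major_activity
print(satisficing(options, thresholds, order))  # drop_minor_activity
```

The two rules disagree here: the compensatory traveler trades away an important activity for time savings, while the satisficer stops at the first acceptable option and keeps the important activity, matching the behavior the abstract reports.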
Procedia PDF Downloads 112
291 The Evaluation of Child Maltreatment Severity and the Decision-Making Processes in the Child Protection System
Authors: Maria M. Calheiros, Carla Silva, Eunice Magalhães
Abstract:
Professionals working in child protection services (CPS) need common and clear criteria to identify cases of maltreatment and to differentiate levels of severity in order to determine when CPS intervention is required, its nature and urgency, and, in most countries, the service that will be in charge of the case (community or specialized CPS). The decision-making process in CPS is indeed complex, and, for that reason, such criteria are particularly important for those who contribute significantly to decision-making in child maltreatment cases. The main objective of this presentation is to describe the Maltreatment Severity Questionnaire (MSQ), specifically designed to be used by professionals in the CPS, which adopts a multidimensional approach and uses a scale of severity within subtypes. Specifically, we aim to provide evidence of the validity and reliability of this tool, in order to improve the quality and validity of assessment processes and, consequently, decision making in CPS. The total sample was composed of 1000 children and/or adolescents (51.1% boys), aged between 0 and 18 years old (M = 9.47; SD = 4.51). All the participants had been referred to official institutions of the child and youth protection system. Child and adolescent maltreatment (abuse, neglect experiences, and sexual abuse) was assessed by CPS professionals with 21 items of the Maltreatment Severity Questionnaire (MSQ). Each item (subtype) was composed of four descriptors of increasing severity. Professionals rated the level of severity using a 4-point scale (1 = minimally severe; 2 = moderately severe; 3 = highly severe; 4 = extremely severe). The construct validity of the Maltreatment Severity Questionnaire was assessed with a holdout method, performing an Exploratory Factor Analysis (EFA) followed by a Confirmatory Factor Analysis (CFA). The final solution comprised 18 items organized in three factors explaining 47.3% of the variance. 
‘Physical neglect’ (eight items) was defined by parental omissions concerning the assurance and monitoring of the child’s physical well-being and health, namely in terms of clothing, hygiene, housing conditions, and environmental security. ‘Physical and psychological abuse’ (four items) described abusive physical and psychological actions, namely coercive/punitive disciplinary methods, physically violent methods, or verbal interactions that offend and denigrate the child, with the potential to disrupt psychological attributes (e.g., self-esteem). ‘Psychological neglect’ (six items) involved omissions related to children’s emotional development, mental health monitoring, school attendance, and developmental needs, as well as inappropriate relationship patterns with attachment figures. Results indicated good reliability for all the factors. The assessment of child maltreatment cases with the MSQ could have a set of practical and research implications: a) it is a valid and reliable multidimensional instrument to measure child maltreatment; b) it is an instrument integrating the co-occurrence of various types of maltreatment and a within-subtypes scale of severity; c) specifically designed for professionals, it may assist them in decision-making processes; d) rather than using case file reports to evaluate maltreatment experiences, researchers could guide their research on the determinants and consequences of maltreatment more appropriately.
Keywords: assessment, maltreatment, children and youth, decision-making
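Factor reliability of the kind reported above is typically summarised with an internal-consistency statistic such as Cronbach's alpha. A minimal sketch on invented 4-point severity ratings (the item scores and respondent counts are hypothetical, not the study's data):

```python
def cronbach_alpha(items):
    # items: one list of scores per item, all over the same respondents.
    k = len(items)
    n = len(items[0])

    def variance(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var / variance(totals))

# Invented 4-point severity ratings of five cases on three related items.
items = [
    [1, 2, 3, 4, 4],
    [1, 2, 2, 4, 3],
    [2, 2, 3, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.96
```

Values near 1 indicate that the items of a factor move together across cases; the abstract does not report its exact coefficients, so the 0.96 here is purely a property of the invented data.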
Procedia PDF Downloads 290
290 The Perceptions of Patients with Osteoarthritis at a Public Community Rehabilitation Centre in the Cape Metropole for Using Digital Technology in Rehabilitation
Authors: Gabriela Prins, Quinette Louw, Dawn Ernstzen
Abstract:
Background: Access to rehabilitation services is a major challenge globally, especially in low- and middle-income countries (LMICs), where resources and infrastructure are extremely limited. Telerehabilitation (TR) has emerged in recent decades as a highly promising method of dramatically expanding access to rehabilitation services globally. TR provides rehabilitation care remotely using communication technologies such as video conferencing, smartphones, and internet-connected devices. This boosts accessibility in underprivileged regions and allows greater flexibility for patients. Despite this, TR is hindered by several factors, including limited technological resources, high costs, lack of digital access, and unprepared healthcare systems, which are major barriers to widespread adoption among LMIC patients. These barriers have collectively hindered the implementation and adoption of TR services in LMIC healthcare settings. Adoption of TR will also require the buy-in of end users, and limited information is available on the perspectives of the South African population. Aim: The study aimed to understand patients' perspectives regarding the use of digital technology as part of their osteoarthritis (OA) rehabilitation at a public community healthcare centre in the Cape Metropole area. Methods: A qualitative descriptive study design was used with 10 OA patients from a public community rehabilitation centre in South Africa. Data collection included semi-structured interviews and patient-reported outcome measures (PSFS, ASES-8, and EuroQol EQ-5D-5L) on functioning and quality of life. Transcribed interview data were coded in ATLAS.ti 22.2 and analyzed using thematic analysis. The results were documented narratively. Results: Four themes arose from the interviews. 
The themes were: telerehabilitation awareness (use of digital technology, information sources, and prior experience with technology/TR), telerehabilitation benefits (access to healthcare providers, access to educational information, convenience, time and resource efficiency, and facilitating family involvement), telerehabilitation implementation considerations (openness towards TR implementation, learning about TR and technology, the therapeutic relationship, and privacy), and future use of telerehabilitation (personal preference and TR for the next generation). The ten participants demonstrated limited awareness of and exposure to TR, as well as minimal digital literacy and skills. They were skeptical about the effectiveness of TR compared to in-person rehabilitation and valued physical interactions with health professionals. However, some recognized potential benefits of TR for accessibility, convenience, family involvement, and improving community health in the long term. Participants were willing to try TR given sufficient training. Conclusion: With targeted efforts addressing the identified barriers around awareness, technological literacy, clinician readiness, and resource availability, perspectives on TR may shift positively from uncertainty towards endorsement of this expanding approach for easier rehabilitation access in LMICs.
Keywords: digital technology, osteoarthritis, primary health care, telerehabilitation
Procedia PDF Downloads 77
289 From Over-Tourism to Over-Mobility: Understanding the Mobility of Incoming City Users in Barcelona
Authors: José Antonio Donaire Benito, Konstantina Zerva
Abstract:
Historically, cities have been places where people from many nations and cultures have met and settled together, while population flows and density have had a significant impact on urban dynamics. Cities' high density of social, cultural, and business offerings, everyday services, and other amenities not intended for tourists draws not only tourists but a wide range of city users as well. Through the coordination of city rhythms and the porosity of the community, city users order and frame their urban experience. On one side, recent literature focuses on the shift in the urban tourist experience from 'having' a holiday through 'doing' activities to 'becoming' a local by experiencing a part of daily life. On the other hand, there is a debate on the 'touristification of everyday life', whereby middle- and upper-class urban dwellers display attitudes and behaviors that are virtually indistinguishable from those of visitors. With the advent of globalization and technological advances, modern society has undergone a radical transformation that has altered mobility patterns within it, blurring the boundaries between tourism and everyday life, work and leisure, and "hosts" and "guests". Additionally, the presence of other 'temporary city' users, such as commuters, digital nomads, second-home owners, and migrants, contributes to a more complex transformation of tourist cities. Moving away from this traditional clear distinction between 'hosts' and 'guests', which represents a more static view of tourism, and towards a more liquid narrative of mobility, academics studying tourism development are embracing the New Mobilities Paradigm. The latter moves beyond the static structures of the modern world and focuses on the ways in which social entities are made up of people, machines, information, and images in a moving system. 
In light of this fluid interdependence between tourists and guests, a question arises as to whether overtourism, which is considered the underlying cause of citizens' perception of a lower urban quality of life, is a fair representation of perceived mobility excessiveness, disruptive place consumption, and resident displacement. As a representative example of the overtourism narrative, Barcelona was chosen as the study area, with a focus on incoming city users in order to reflect in depth the variety of people who contribute to mobility flows beyond the residents themselves. Several statistical data sources were analyzed to determine the number of national and international visitors present in Barcelona at some point during the day in 2019. Specifically, tracking data gathered from mobile phone users within the city are combined with tourist surveys, urban mobility data, zenithal data capture, and information about the city's attractions. The paper shows that tourists are only a small part of the different incoming city users who enter Barcelona daily; excursionists, commuters, and metropolitans also contribute to a high mobility flow. Given the diversity of incoming city users and their place consumption, the city's urban experience seems more likely to be impacted by over-mobility than over-tourism.
Keywords: city users, density, new mobilities paradigm, over-tourism
Procedia PDF Downloads 79
288 'iTheory': Mobile Way to Music Fundamentals
Authors: Marina Karaseva
Abstract:
The beginning of this century marked a new digital epoch in education. In the last decade, the newest stage of this process was initiated by touch-screen mobile devices and the applications written for them. Touch capabilities are of especial importance for music majors learning the fundamentals of music. The phenomenon of touch, firstly, makes it realistic to play on the screen as on a musical instrument and, secondly, helps students learn music theory by ear while listening to its sound elements. Nowadays we can identify several levels of such mobile applications: from basic ones devoted to elementary music training, such as interval and chord recognition, to more advanced applications that deal with the perception of modes beyond major and minor, ethnic timbres, and complicated rhythms. The main purpose of the proposed paper is to disclose the main tendencies in this process and to demonstrate the most innovative features of music theory applications based on iOS and Android, the most commonly used systems. Methodological recommendations on how to use this digital material musicologically will be given for professional music education at different levels. These recommendations are based on the author's more than ten years of 'iTheory' teaching experience. In this paper, we attempt to classify all types of 'iTheory' mobile applications into several groups according to their methodological goals. The general concepts given below will be demonstrated with concrete examples. The most numerous group of programs consists of simulators for studying notes with audio-visual links. The link pairs are as follows: sound — musical notation, which may be used like flashcards for studying words and letters; sound — key; sound — string (basically, the guitar's). The second large group consists of test programs containing a game component. 
As a rule, these are based on exercises in identifying by ear, and reconstructing by voice, sounds and intervals from their sounding — harmonic and melodic — as well as musical modes, rhythmic patterns, chords, and selected instrumental timbres. Some programs are aimed at establishing aural connections between concepts of music theory and their musical embodiments. There are also programs focused on developing working musical memory (with repetition of sounded phrases and their transposition to a new pitch) as well as on perfect pitch training. In addition, a number of programs for developing improvisation skills have been created. An absolute-pitch system of solmization is the common basis for mobile programs; however, it is also possible to find programs focused on the relative pitch system of solfège. In the App Store and the Google Play online store, there are also many free programs simulating musical instruments — piano, guitar, celesta, violin, and organ. These programs may be effective for individual and group exercises in ear training or composition classes. The great variety and good sound quality of these programs now give musicians a unique opportunity to master their musical abilities in a shorter time. That is why such teaching material may be a route to the effective study of music theory.
Keywords: ear training, innovation in music education, music theory, mobile devices
Procedia PDF Downloads 205
287 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling
Authors: Danlei Yang, Luofeng Huang
Abstract:
The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation. The core of a digital twin, however, should be its model, which can mirror, shadow, and thread with the real-world entity, and this remains underdeveloped. For a floating solar energy system, a digital twin can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate the real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above the solar panel and two inclined at a 45° angle in front of and behind it. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability of the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. 
An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record the heave and pitch amplitudes of the floating system’s motions. An electronic load measures the voltage and current output of the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to the digital model, which processes historical and real-time data, identifies patterns, and predicts the system’s performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It offers useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve overall solar energy yield whilst minimising operational costs and risks.
Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence
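The train-then-predict role of the ANN can be sketched with a tiny single-hidden-layer network in plain Python. The irradiance-temperature-power relation below is invented purely for illustration; the actual twin would be trained on the sensor streams described above, and practical systems would use an established ML library rather than hand-rolled backpropagation:

```python
import math
import random

random.seed(42)

# Invented mapping for illustration: normalised [irradiance, panel temperature]
# inputs to a made-up power output that falls as the panel heats up.
def assumed_power(irr, temp):
    return irr * (1.0 - 0.3 * temp)

data = [(i / 10, t / 10) for i in range(11) for t in range(11)]
targets = [assumed_power(x, y) for x, y in data]

# One hidden layer of tanh units, trained with stochastic gradient descent.
H = 6
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(ws[0] * x[0] + ws[1] * x[1] + b) for ws, b in zip(w1, b1)]
    return h, sum(w * v for w, v in zip(w2, h)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(data, targets)) / len(data)

loss_before = mse()
lr = 0.05
for _ in range(300):
    for x, t in zip(data, targets):
        h, y = forward(x)
        err = y - t
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j][0] -= lr * grad_h * x[0]
            w1[j][1] -= lr * grad_h * x[1]
            b1[j] -= lr * grad_h
        b2 -= lr * err
loss_after = mse()
print(f"MSE before: {loss_before:.4f}, after: {loss_after:.4f}")
```

In the twin, the trained `forward` pass would run in sync with the rig, so predicted and measured power can be compared in real time; a growing gap between them is the kind of signal that informs the maintenance planning mentioned above.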
Procedia PDF Downloads 8
286 Determinants of Child Nutritional Inequalities in Pakistan: Regression-Based Decomposition Analysis
Authors: Nilam Bano, Uzma Iram
Abstract:
Globally, the dilemma of undernutrition has been a notable concern for researchers, academics, and policymakers for many centuries because of its severe consequences. Nutritional deficiencies create hurdles that prevent people from achieving a better standard of living. The consequences of undernutrition affect the economic progress of a country at the macro level as well as the micro level. The first five years of a child's life are considered critical for physical growth and brain development. In this regard, children require special care and good-quality food (nutrient intake) to meet the nutritional demands of their growing bodies. Having sensitive statures and health, children, especially those under the age of 5 years, are highly vulnerable to poor economic, housing, environmental, and other social conditions. Besides confronting economic challenges and political upheavals, Pakistan is also going through a rough patch in the context of social development. Many children face serious health problems in the absence of the required nutrition. The complexity of this issue is growing more severe by the day, and children in particular are left behind with various immune problems and vitamin and mineral deficiencies. It is noted that children from well-off backgrounds are less likely to be affected by undernutrition. In order to underline this issue, the present study aims to highlight the existing nutritional inequalities among children under five years of age in Pakistan. Moreover, it strives to decompose the factors that most strongly drive these inequalities and that demand the attention of the relevant authorities. The Pakistan Demographic and Health Survey 2012-13 was employed to assess the relevant indicators of undernutrition, such as stunting, wasting, and underweight, and the associated socioeconomic factors. 
The objectives were pursued using relevant empirical techniques. Concentration indices were constructed to measure nutritional inequalities using three measures of undernutrition: stunting, wasting, and underweight. In addition, a decomposition analysis following logistic regression was performed to unfold the determinants that most strongly drive the nutritional inequalities. The negative values of the concentration indices illustrate that children from marginalized backgrounds are affected by undernutrition more than their counterparts from rich households. Furthermore, the results of the decomposition analysis indicate that child age, size of the child at birth, wealth index, household size, parents' education, mother's health, and place of residence are the factors contributing most to the prevalence of the existing nutritional inequalities. Considering the results of the study, it is suggested that policymakers design policies in a way that stimulates the health sector of Pakistan productively. Increasing the number of effective health awareness programs for mothers would make a notable difference. Moreover, parental education must be a concern for policymakers, as the present research shows it has a significant association with eradicating nutritional inequalities among children.
Keywords: concentration index, decomposition analysis, inequalities, undernutrition, Pakistan
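A concentration index of the kind used above relates a health outcome to households ranked by wealth; negative values mean the burden falls on the poor. A minimal sketch using the convenient covariance formula, on invented data (not the PDHS figures):

```python
def concentration_index(health, wealth):
    # Rank individuals by wealth (poorest first) and use fractional ranks;
    # the index is 2 * cov(health, rank) / mean(health).
    n = len(health)
    order = sorted(range(n), key=lambda i: wealth[i])
    rank = [0.0] * n
    for pos, i in enumerate(order):
        rank[i] = (pos + 0.5) / n  # fractional rank in (0, 1)
    mu_h = sum(health) / n
    mu_r = sum(rank) / n  # always 0.5
    cov = sum((h - mu_h) * (r - mu_r) for h, r in zip(health, rank)) / n
    return 2 * cov / mu_h

# Invented data: 1 = child is stunted, 0 = not; one wealth score per household.
stunted = [1, 1, 1, 0, 0, 0]
wealth = [1, 2, 3, 4, 5, 6]   # poorest to richest
print(concentration_index(stunted, wealth))  # -0.5: stunting concentrated among the poor
```

An index of 0 would mean stunting is spread evenly across the wealth distribution; the decomposition step in the study then attributes the non-zero index to contributing factors such as parental education and household size.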
Procedia PDF Downloads 132
285 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy
Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini
Abstract:
Particle therapy (PT) is a modern technique of non-invasive radiotherapy mainly devoted to the treatment of tumours that are untreatable with surgery or conventional radiotherapy because they are localised close to organs at risk (OaR). Nowadays, PT is available in about 55 centres in the world, and only 20% of them are able to treat with carbon ion beams. However, the efficacy of ion-beam treatments is so impressive that many new centres are under construction. The interest in this powerful technology lies in the main characteristics of PT: the high irradiation precision and the conformity of the dose released to the tumour, with simultaneous preservation of the adjacent healthy tissue. However, the beam interactions with the patient produce a large component of secondary particles whose additional dose has to be taken into account when defining the treatment plan. Although the largest fraction of the dose is released to the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of neutrons within the patient's body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). While SMNs can develop up to decades after treatment, their incidence directly impacts the quality of life of cancer survivors, in particular pediatric patients. Dedicated Treatment Planning Systems (TPS) are used to predict normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux is available, nor of its energy and angular distributions: an accurate characterization is needed in order to improve TPS and reduce safety margins. 
The MONDO project (MOnitor for Neutron Dose in hadrOntherapy) is devoted to the construction of a secondary neutron tracker tailored to the characterization of this secondary neutron component. The detector, based on the tracking of the recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in alternating x-y oriented layers. The final size of the device is 10 x 10 x 20 cm3 (squared 250 µm scintillating fibres, double cladding). The readout of the fibres is carried out with a dedicated SPAD array sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). Both the detector and the SBAM sensor are under development, and the detector is expected to be fully constructed by the end of the year. MONDO will carry out data taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia), and at HIT (Heidelberg) with carbon ions in order to characterize the neutron component, predict with much more precision the additional dose delivered to patients, and drastically reduce the current safety margins. Preliminary measurements with charged particle beams and Monte Carlo FLUKA simulations will be presented.
Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering
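The tracking principle rests on simple neutron-proton elastic scattering kinematics: for equal masses in the non-relativistic limit, a recoil proton emitted at angle θ with respect to the neutron direction carries E_p = E_n·cos²θ, so the neutron energy can be recovered from the measured proton energy and direction. A minimal sketch of that relation (illustrative only, not the project's reconstruction code):

```python
import math

def neutron_energy_from_recoil(E_p_MeV, theta_p_rad):
    """Non-relativistic n-p elastic scattering: the recoil proton
    carries E_p = E_n * cos^2(theta_p), hence the incoming neutron
    energy is E_n = E_p / cos^2(theta_p)."""
    return E_p_MeV / math.cos(theta_p_rad) ** 2

# A 50 MeV proton recoiling at 60 degrees implies a 200 MeV neutron.
E_n = neutron_energy_from_recoil(50.0, math.pi / 3)
```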
Procedia PDF Downloads 223
284 A Cross-Sectional Study Assessing Communication Practices among Doctors at a University Hospital in Pakistan
Authors: Muhammad Waqas Baqai, Noman Shahzad, Rehman Alvi
Abstract:
Communication among healthcare givers is the essence of quality patient care, and any compromise results in errors and inefficiency, leading to poor outcomes. The use of smartphones among health professionals has increased tremendously. Almost every health professional carries one, and the majority of them use a third-party communication application called WhatsApp for work-related communication. It gives instant access to the person responsible for any particular query and therefore helps in efficient and timely decision making. It is also an easy way of sharing medical documents and multimedia, and it provides a platform for consensual decision making through group discussions. However, clinical communication through WhatsApp has some demerits too, including a reduction in verbal communication, worsening professional relations, unprofessional behavior, risk of confidentiality breaches, and threats from cyber-attacks. On the other hand, the traditional pager device used in many healthcare systems is unidirectional and cannot convey any information other than the number to which the receiver has to respond. Our study focused on these two widely used modalities of communication among doctors at the largest tertiary care center of Pakistan, the Aga Khan University Hospital. Our aim was to determine which modality is considered better and poses fewer threats to medical data. Approval from the ethical review committee of the institute was obtained prior to the conduct of this study. We submitted an online survey form to all the interns and residents working at our institute and collected their responses over one month. 162 submissions were recorded and analyzed using descriptive statistics. Only 20% of respondents were comfortable with using pagers exclusively, 52% with WhatsApp, and 28% with both. 65% think that WhatsApp is time-saving and quicker than the pager. 
54% of them considered WhatsApp a nuisance because of work-related notifications in their off-work hours. 60% think that they are more likely to miss information through the pager system because of its unidirectional nature. Almost all (96%) residents and interns found WhatsApp useful for saving information for future reference. For urgent issues, the majority (70%) preferred the pager over WhatsApp, and the pager was also considered more valid in terms of hospital policies and legal issues. Among the major advantages of WhatsApp they listed were easy mass communication, sharing of clinical pictures, universal access, and no need to carry an additional device. However, the major drawback of using WhatsApp for clinical communication that everyone shared was the threat to patients' confidentiality, as clinicians usually share pictures of wounds, clinical documents, etc. Lastly, we asked them whether there is a need for a separate instant messaging application dedicated to clinical communication only, and 90% responded positively. We therefore concluded that both modalities have their merits and demerits, but the greatest drawbacks of WhatsApp are the risk of a breach in patients' confidentiality and off-work disturbance. Hence, we recommend a more secure, institute-run application for all intra-hospital communication, where staff can share documents, pictures, etc. easily in a controlled environment.
Keywords: WhatsApp, pager, clinical communication, confidentiality
Procedia PDF Downloads 146
283 Experimental and Modelling Performances of a Sustainable Integrated System of Conditioning for Bee-Pollen
Authors: Andrés Durán, Brian Castellanos, Marta Quicazán, Carlos Zuluaga-Domínguez
Abstract:
Bee-pollen is an apicultural food product with growing appreciation among consumers, given its remarkable nutritional and functional composition, in particular protein (24%), dietary fiber (15%), phenols (15 – 20 GAE/g) and carotenoids (600 – 900 µg/g). These properties are determined by the geographical and climatic characteristics of the region where it is collected. Several countries are recognized for their pollen production, e.g. China, the United States, Japan, and Spain, among others. Beekeepers install traps at the entrance of the hive, where bee-pollen is collected; after the removal of foreign particles and drying, the product is ready to be marketed. However, in countries located along the equator, the absence of seasons and a constant tropical climate throughout the year favor more rapid spoilage of foods with elevated water activity. The climatic conditions also trigger the proliferation of microorganisms and insects. Added to the fact that beekeepers usually do not have adequate processing systems for bee-pollen, this leads to deficiencies in the quality and safety of the product. The Andean region of South America, lying on the equator, typically has a high production of bee-pollen of up to 36 kg/year/hive, four times higher than in countries with marked seasons. This region also lies at altitudes above 2500 meters above sea level and receives extreme ultraviolet solar radiation all year long. As a defense mechanism against radiation, plants produce more secondary metabolites acting as antioxidant agents; hence, plant products such as bee-pollen contain remarkably more phenolics and carotenoids than those collected elsewhere. Considering this, the improvement of bee-pollen processing facilities through technical modifications and the implementation of an integrated cleaning and drying system for the product in an apiary in the area was proposed. 
The beehives were modified through the installation of alternative bee-pollen traps to avoid sources of contamination. The processing facility was modified according to Good Manufacturing Practices, implementing the combined use of a cabin dryer with temperature control and forced airflow and a greenhouse-type solar drying system. Additionally, for the separation of impurities, a cyclone-type system was implemented, complementary to screening equipment. With these modifications, a decrease in the content of impurities and in the microbiological load of bee-pollen was seen from the first stages, principally a reduction in the presence of molds and yeasts and in the number of impurities of animal origin. The use of the greenhouse solar dryer integrated with the cabin dryer allowed the processing of larger quantities of product with shorter waiting times in storage, reaching a moisture content of about 6% and a water activity lower than 0.6, which is appropriate for the conservation of bee-pollen. Additionally, the contents of functional and nutritional compounds were not affected; an increase of up to 25% in phenol content was even observed, along with a non-significant decrease in carotenoid content and antioxidant activity.
Keywords: beekeeping, drying, food processing, food safety
Procedia PDF Downloads 104
282 Investigation of Chemical Effects on the Lγ2,3 and Lγ4 X-ray Production Cross Sections for Some Compounds of 66Dy at Photon Energies Close to the L1 Absorption-Edge Energy
Authors: Anil Kumar, Rajnish Kaur, Mateusz Czyzycki, Alessandro Migilori, Andreas Germanos Karydas, Sanjiv Puri
Abstract:
The radiative decay of Li (i = 1-3) sub-shell vacancies produced through photoionization results in a characteristic emission spectrum comprising several X-ray lines, whereas non-radiative vacancy decay results in an Auger electron spectrum. Accurate and reliable data on the Li (i = 1-3) sub-shell X-ray production (XRP) cross sections are of considerable importance for the investigation of atomic inner-shell ionization processes as well as for quantitative elemental analysis of different types of samples employing the energy-dispersive X-ray fluorescence (EDXRF) technique. At incident photon energies in the vicinity of the absorption-edge energies of an element, many-body effects, including electron correlation, core relaxation, inter-channel coupling and post-collision interactions, become significant in the photoionization of atomic inner shells. Further, in the case of compounds, the characteristic emission spectrum of a specific element is expected to be influenced by the chemical environment (coordination number, oxidation state, nature of the ligands/functional groups attached to the central atom, etc.). These chemical effects on L X-ray fluorescence parameters have previously been investigated by performing measurements at incident photon energies much higher than the Li (i = 1-3) sub-shell absorption-edge energies using EDXRF spectrometers. In the present work, the cross sections for production of the Lk (k = γ2,3, γ4) X-rays have been measured for some compounds of 66Dy, namely Dy2O3, Dy2(CO3)3, Dy2(SO4)3.8H2O, DyI2 and Dy metal, by tuning the incident photon energy a few eV above the L1 absorption-edge energy in order to investigate the influence of chemical effects on these cross sections in the presence of the many-body effects that become significant at photon energies close to the absorption-edge energies. 
The present measurements were performed under vacuum at the IAEA end-station of the X-ray fluorescence beamline (10.1L) of the ELETTRA synchrotron radiation facility (Trieste, Italy) using self-supporting pressed-pellet targets (1.3 cm diameter, nominal thickness ~176 mg/cm2) of the 66Dy compounds (procured from Sigma Aldrich) and a metallic foil of 66Dy (nominal thickness ~3.9 mg/cm2, procured from Good Fellow, UK). The measured cross sections have been compared with theoretical values calculated using Dirac-Hartree-Slater (DHS) model based fluorescence and Coster-Kronig yields, Dirac-Fock (DF) model based X-ray emission rates, and two sets of L1 sub-shell photoionization cross sections: one based on the non-relativistic Hartree-Fock-Slater (HFS) model and one deduced from the self-consistent Dirac-Hartree-Fock (DHF) model based total photoionization cross sections. The measured Lγ2,3 and Lγ4 XRP cross sections for 66Dy as well as for its compounds are found to be higher than the two sets of calculated values by ~14-36%. It is worth mentioning that the Lγ2,3 and Lγ4 X-ray lines originate from the filling of L1 sub-shell vacancies by outer sub-shell (N2,3 and O2,3) electrons, which are much more sensitive to the chemical environment around the central atom. The observed differences between the measured and theoretical values are attributed to the combined influence of the many-body effects and the chemical effects.
Keywords: chemical effects, L X-ray production cross sections, many-body effects, synchrotron radiation
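The theoretical values referred to above are conventionally assembled from three ingredients: the L1 sub-shell photoionization cross section σ1(E), the L1 fluorescence yield ω1, and the fractional radiative emission rate F(1,k) of the line of interest; for lines filling an L1 vacancy no Coster-Kronig feed-in term enters, since L1 is the innermost L sub-shell. A minimal sketch of this standard combination (the numerical values in the example are placeholders, not the tabulated data used in this work):

```python
def l1_xray_production_cross_section(sigma1, omega1, fraction):
    """sigma_Lk = sigma1(E) * omega1 * F(1, k) for an X-ray line k
    (e.g. gamma2,3 or gamma4) that fills an L1 sub-shell vacancy.
    sigma1: L1 photoionization cross section at energy E (barns/atom),
    omega1: L1 fluorescence yield, fraction: fractional emission rate."""
    return sigma1 * omega1 * fraction

# Placeholder inputs purely to illustrate the arithmetic.
sigma_lg4 = l1_xray_production_cross_section(100.0, 0.12, 0.03)
```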
Procedia PDF Downloads 132
281 An Online Space for Practitioners in the Water, Sanitation and Hygiene Sector
Authors: Olivier Mills, Bernard McDonell, Laura A. S. MacDonald
Abstract:
The increasing availability and quality of internet access throughout the developing world provide an opportunity to use online spaces to disseminate water, sanitation and hygiene (WASH) knowledge to practitioners. Since 2001, CAWST has provided in-person education, training and consulting services to thousands of WASH practitioners all over the world, supporting them to start, troubleshoot, improve and expand their WASH projects. As CAWST continues to grow, the organization faces challenges in meeting demand from clients and in providing consistent, timely technical support. In 2012, CAWST began using online spaces to expand its reach by developing a series of resource websites and webinars. CAWST has developed a WASH education and training resources website, a Biosand Filter (BSF) Knowledge Base, a Household Water Treatment and Safe Storage Knowledge Base, a mobile app for offline users, a live chat support tool, a WASH e-library, and a series of webinar-style online training sessions to complement its in-person capacity development services. In order to determine the preliminary outcomes of providing these online services, CAWST monitored and analyzed registration to the online spaces, downloads of the educational materials, and webinar attendance, and conducted user surveys. The purpose of this analysis was to find out who was using the online spaces, where users came from, and how the resources were being used. CAWST's WASH resources website has served over 5,800 registered users from 3,000 organizations in 183 countries. Additionally, the BSF Knowledge Base has served over 1,000 registered users from 68 countries, and over 540 people from 73 countries have attended CAWST's online training sessions. This indicates that the online spaces are effectively reaching a large number of users from a range of countries. 
A 2016 survey of the Biosand Filter Knowledge Base showed that approximately 61% of users are practitioners and 39% are researchers or students. Of the respondents, 46% reported using the BSF Knowledge Base to initiate a BSF project, and 43% reported using the information to train BSF technicians. Finally, 61% indicated that they would like even greater support from CAWST's technical advisors going forward. The analysis provides an encouraging indication that CAWST's online spaces are contributing to its objective of engaging and supporting WASH practitioners to start, improve and expand their initiatives. CAWST has learned several lessons during the development of these online spaces, in particular related to the resources needed to create and maintain the spaces and to respond to the demand created. CAWST plans to continue expanding its online spaces, improving the user experience of the sites, and involving new contributors and content types. Through the use of online spaces, CAWST has been able to increase its global reach and impact without significantly increasing its human resources, by connecting WASH practitioners with the information they most need in a practical and accessible manner. This paper presents CAWST's use of online spaces through the platforms discussed above and an analysis of the use of these platforms.
Keywords: education and training, knowledge sharing, online resources, water and sanitation
Procedia PDF Downloads 266
280 Smart Services for Easy and Retrofittable Machine Data Collection
Authors: Till Gramberg, Erwin Gross, Christoph Birenbaum
Abstract:
This paper presents the approach of the Easy2IoT research project. Easy2IoT aims to enable companies in the sheet metal prefabrication and processing industry to enter the Industrial Internet of Things (IIoT) with a low-threshold and cost-effective approach. It focuses on the development of physical hardware and software to easily capture machine activities on a sawing machine, benefiting various stakeholders in the SME value chain, including machine operators, tool manufacturers and service providers. The methodological approach of Easy2IoT includes an in-depth requirements analysis and customer interviews with stakeholders along the value chain. Based on these insights, actions, requirements and potential solutions for smart services are derived. The focus is on providing actionable recommendations, competencies and easy integration through no-/low-code applications to facilitate implementation and connectivity within production networks. At the core of the project is a novel, non-invasive measurement and analysis system that can be easily deployed and made IIoT-ready. This system collects machine data without interfering with the machines themselves: it non-invasively measures the tension on a sawing machine. The collected data is then connected and analyzed using artificial intelligence (AI) to provide smart services through a platform-based application. Three smart services are being developed within Easy2IoT to provide immediate benefits to users. First, wear part and product material condition monitoring with predictive maintenance for sawing processes: the non-invasive measurement system enables the monitoring of tool wear, such as saw blades, and of the quality of consumables and materials, so that service providers and machine operators can optimize maintenance and reduce downtime and material waste. Second, optimization of Overall Equipment Effectiveness (OEE) by monitoring machine activity: the non-invasive system tracks machining times, setup times and downtime to identify opportunities for OEE improvement and to reduce unplanned machine downtime. Third, estimation of CO2 emissions for connected machines: CO2 emissions are calculated for the entire life of the machine and for individual production steps based on captured power consumption data, supporting energy management and product development decisions. The key to Easy2IoT is its modular and easy-to-use design. The non-invasive measurement system is universally applicable and does not require specialized knowledge to install. The platform application allows easy integration of various smart services and provides a self-service portal for activation and management. Innovative business models will also be developed to promote the sustainable use of the collected machine activity data. The project addresses the digitalization gap between large enterprises and SMEs. Easy2IoT provides SMEs with a concrete toolkit for IIoT adoption, facilitating the digital transformation of smaller companies, e.g. through the retrofitting of existing machines.
Keywords: smart services, IIoT, IIoT platform, Industrie 4.0, big data
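The OEE and CO2 smart services rest on standard calculations: OEE is the product of availability, performance and quality, and a CO2 estimate scales captured energy consumption by a grid emission factor. A rough sketch of both (the function names, time units and the emission factor are illustrative assumptions, not project specifics):

```python
def oee(planned_time_min, downtime_min, ideal_cycle_time_min,
        total_count, good_count):
    """Overall Equipment Effectiveness = availability * performance * quality,
    computed from the machine-activity quantities the system tracks."""
    run_time = planned_time_min - downtime_min
    availability = run_time / planned_time_min
    performance = (ideal_cycle_time_min * total_count) / run_time
    quality = good_count / total_count
    return availability * performance * quality

def co2_kg(energy_kwh, grid_factor_kg_per_kwh=0.4):
    """CO2 estimate from captured power consumption; the default grid
    emission factor is a placeholder, not a project value."""
    return energy_kwh * grid_factor_kg_per_kwh

# An 8-hour shift with 1 hour of downtime, a 1-minute ideal cycle,
# 350 parts cut and 340 of them good.
shift_oee = oee(480, 60, 1.0, 350, 340)
```

Note that for a single machine the three factors telescope: OEE reduces to (ideal cycle time x good count) / planned time, which is why accurate time tracking alone already pins down the figure.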
Procedia PDF Downloads 73
279 The Recorded Interaction Task: A Validation Study of a New Observational Tool to Assess Mother-Infant Bonding
Authors: Hannah Edwards, Femke T. A. Buisman-Pijlman, Adrian Esterman, Craig Phillips, Sandra Orgeig, Andrea Gordon
Abstract:
Mother-infant bonding refers to the early emotional connectedness between a mother and her infant. Strong mother-infant bonding promotes higher-quality mother-infant interactions, including prolonged breastfeeding, secure attachment, and increased sensitive parenting and maternal responsiveness. Strengthening all such interactions leads to improved social behavior and emotional and cognitive development throughout childhood, adolescence and adulthood. The positive outcomes observed following strong mother-infant bonding emphasize the need to screen new mothers for disrupted mother-infant bonding, and in turn the need for a robust, valid assessment tool. A recent scoping review conducted by the research team identified four tools to assess mother-infant bonding, all of which employ self-rating scales. Thus, whilst these tools demonstrate both adequate validity and reliability, they rely on self-reported information from the mother and as such may reflect a mother's perception of bonding with her infant rather than her actual behavior. Therefore, a new tool to assess mother-infant bonding has been developed. The Recorded Interaction Task (RIT) addresses the shortcomings of previous tools by employing observational methods to assess bonding. The RIT focuses on a common interaction between mother and infant, changing a nappy, at the target age of 2-6 months; the interaction is visually recorded and later assessed. Thirteen maternal and seven infant behaviors are scored on the RIT Observation Scoring Sheet, and a final combined score of mother-infant bonding is determined. The aim of the current study was to assess the content validity and inter-rater reliability of the RIT. A panel of six experts with specialized expertise in bonding and infant behavior was consulted. The experts were provided with the RIT Observation Scoring Sheet, a visual recording of a nappy change interaction, and a feedback form. 
The experts scored the mother-infant interaction on the RIT Observation Scoring Sheet and completed the feedback form, which collected their opinions on the validity of each item on the scoring sheet and of the RIT as a whole. Twelve of the 20 items on the RIT Observation Scoring Sheet were scored 'Valid' by all (n=6) or most (n=5) experts. Two items received a 'Not valid' score from one expert. The remaining items received a mixture of 'Valid' and 'Potentially valid' scores. Few changes were made to the RIT Observation Scoring Sheet following expert feedback; these included rewording items for clarity and excluding an item focused on behavior deemed not relevant for the target infant age. The overall ICC for single-rater absolute agreement was 0.48 (95% CI 0.28 – 0.71). The experts' (n=6) ratings were less consistent for infant behavior (ICC 0.27 (-0.01 – 0.82)) than for maternal behavior (ICC 0.55 (0.28 – 0.80)). Whilst previous tools employ self-report methods to assess mother-infant bonding, the RIT utilizes observational methods. The current study demonstrates adequate content validity and moderate inter-rater reliability of the RIT, supporting its use in future research. A convergent validity study comparing the RIT against an existing tool is currently being undertaken to confirm these results.
Keywords: content validity, inter-rater reliability, mother-infant bonding, observational tool, recorded interaction task
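The single-rater absolute-agreement ICC reported above corresponds to ICC(2,1) in the Shrout-Fleiss scheme (two-way random effects), which can be computed directly from the two-way ANOVA mean squares. A minimal sketch (toy ratings matrices, not the study's data):

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an n-subjects x k-raters array."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ssr = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-subjects SS
    ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-raters SS
    sst = ((x - grand) ** 2).sum()
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = (sst - ssr - ssc) / ((n - 1) * (k - 1))     # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

perfect = icc2_1([[1, 1], [2, 2], [3, 3]])    # identical raters
offset = icc2_1([[1, 2], [2, 3], [3, 4]])     # constant rater bias
```

The second example shows why "absolute agreement" matters: a constant offset between raters lowers ICC(2,1) (here to 2/3) even though the rank ordering of subjects is preserved.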
Procedia PDF Downloads 181
278 Identifying Risk Factors for Readmission Using Decision Tree Analysis
Authors: Sıdıka Kaya, Gülay Sain Güven, Seda Karsavuran, Onur Toka
Abstract:
This study is part of an ongoing research project supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 114K404; participation in this conference was supported by the Hacettepe University Scientific Research Coordination Unit under Project Number 10243. Evaluation of hospital readmissions is gaining importance in terms of quality and cost and is becoming the target of national policies. In Turkey, the topic of hospital readmission is relatively new on the agenda, and very few studies have been conducted on it. The aim of this study was to determine 30-day readmission rates and risk factors for readmission. Whether a readmission was planned, related to the prior admission, and avoidable was also assessed. The study was designed as a prospective cohort study. 472 patients hospitalized in the internal medicine departments of a university hospital in Turkey between February 1, 2015 and April 30, 2015 were followed up. Analyses were conducted using IBM SPSS Statistics version 22.0 and SPSS Modeler 16.0. The average age of the patients was 56, and 56% of the patients were female. Among these patients, 95 were readmitted, giving an overall readmission rate of 20% (95/472). However, only 31 readmissions were unplanned, an unplanned readmission rate of 6.5% (31/472). Of the 31 unplanned readmissions, 24 were related to the prior admission, and only 6 of these related readmissions were avoidable. To determine risk factors for readmission, we constructed a Chi-square Automatic Interaction Detector (CHAID) decision tree. CHAID decision trees are nonparametric procedures that make no assumptions about the underlying data. The algorithm determines how independent variables best combine to predict a binary outcome based on 'if-then' logic, partitioning each independent variable into mutually exclusive subsets based on the homogeneity of the data. 
The independent variables included in the analysis were: clinic of the department; occupied beds/total number of beds in the clinic at the time of discharge; age; gender; marital status; educational level; distance to residence (km); number of people living with the patient; availability of a person to help with the patient's care at home after discharge (yes/no); regular source (physician) of care (yes/no); day of discharge; length of stay; ICU utilization (yes/no); total comorbidity score; means for each of the 3 dimensions of the Readiness for Hospital Discharge Scale (patient's personal status, patient's knowledge, and patient's coping ability); and number of daycare admissions within 30 days of discharge. In the analysis, we included all 95 readmitted patients (46.12%) but only 111 (53.88%) of the 377 non-readmitted patients, in order to balance the data. The risk factors for readmission were found to be total comorbidity score, gender, patient's coping ability, and patient's knowledge. The strongest identifying factor for readmission was the comorbidity score: if a patient's comorbidity score was higher than 1, the risk of readmission increased. The results of this study need to be validated on other data sets with more patients. However, we believe that this study will guide further studies of readmission and that CHAID is a useful tool for identifying risk factors for readmission.
Keywords: decision tree, hospital, internal medicine, readmission
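The splitting criterion at the heart of CHAID can be sketched briefly: each candidate predictor is cross-tabulated against the outcome and the split with the most significant chi-square association is chosen. The snippet below is a deliberately reduced sketch (toy data and names are ours, not the study's): it compares raw Pearson chi-square statistics, which is equivalent to comparing p-values only when the tables share the same degrees of freedom, and it omits CHAID's category merging and Bonferroni adjustment.

```python
def chi_square_stat(col, outcome):
    """Pearson chi-square statistic of the predictor/outcome
    contingency table."""
    cats, classes = sorted(set(col)), sorted(set(outcome))
    n = len(col)
    stat = 0.0
    for cat in cats:
        row_n = sum(1 for c in col if c == cat)
        for cls in classes:
            observed = sum(1 for c, o in zip(col, outcome)
                           if c == cat and o == cls)
            expected = row_n * sum(1 for o in outcome if o == cls) / n
            stat += (observed - expected) ** 2 / expected
    return stat

def best_chaid_split(predictors, outcome):
    """CHAID-style root split for same-shaped tables: the predictor
    with the largest chi-square statistic."""
    return max(predictors, key=lambda name: chi_square_stat(predictors[name], outcome))

# Toy cohort: predictor 'comorbidity' tracks readmission perfectly,
# 'weekday' is unrelated, so 'comorbidity' wins the split.
readmitted = [0] * 10 + [1] * 10
predictors = {"comorbidity": ["low"] * 10 + ["high"] * 10,
              "weekday": ["mon", "tue"] * 10}
root_split = best_chaid_split(predictors, readmitted)
```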
Procedia PDF Downloads 256
277 Aeroelastic Stability Analysis in Turbomachinery Using Reduced Order Aeroelastic Model Tool
Authors: Chandra Shekhar Prasad, Ludek Pesek Prasad
Abstract:
In present-day aero engines, fan blades, turboprop propellers, and gas or steam turbine low-pressure blades are becoming bigger, lighter, and thus more flexible. Therefore, flutter, forced blade response, and vibration-related failure of high-aspect-ratio blades are of major concern for designers and need to be addressed properly in order to achieve a successful component design. At the preliminary design stage, a large number of design iterations is needed to achieve a flutter-free, safe design. Most of the numerical methods used for aeroelastic analysis are field-based methods such as the finite difference method, the finite element method, the finite volume method, or coupled schemes. These numerical schemes solve the coupled fluid-flow/structural equations based on the full Navier-Stokes (NS) equations together with the equations of structural mechanics. Schemes of this type provide very accurate results if modeled properly; however, they are computationally very expensive and require large computing resources along with considerable personal expertise. Therefore, they are not the first choice for aeroelastic analysis during the preliminary design phase, where a reduced-order aeroelastic model (ROAM) with acceptable accuracy and fast execution is more in demand. Similar ROAMs are being used by other researchers for aeroelastic and forced response analysis of turbomachinery. In the present paper, a new medium-fidelity ROAM is developed and implemented in a numerical tool to simulate aeroelastic stability phenomena in turbomachinery as well as in flexible wings. A hybrid flow solver is developed, based on a viscous-inviscid coupling of a 3D panel method (PM) and a 3D discrete vortex particle method (DVM); viscous parameters are estimated using a boundary layer (BL) approach. This method can simulate flow separation and is a good compromise between accuracy and speed compared to CFD. 
In the second phase of the research work, the flow solver (PM) will be coupled with a reduced-order, non-linear beam element method (BEM) based FEM structural solver (with multibody capabilities) to perform complete aeroelastic simulations of steam turbine bladed disks, propellers, fan blades, aircraft wings, etc. A partitioned coupling approach is used for the fluid-structure interaction (FSI). The numerical results are compared with experimental data for different test cases; for the blade cascade test case, the experimental data are obtained from in-house lab experiments at IT CAS. Furthermore, the results from the new aeroelastic model will be compared with classical CFD-CSD based aeroelastic models. The proposed methodology for the aeroelastic stability analysis of gas or steam turbine blades, propellers, or fan blades will provide researchers and engineers with a fast, cost-effective, and efficient tool for aeroelastic (classical flutter) analysis of different designs at the preliminary design stage, where large numbers of design iterations are required in a short time frame.
Keywords: aeroelasticity, beam element method (BEM), discrete vortex particle method (DVM), classical flutter, fluid-structure interaction (FSI), panel method, reduced order aeroelastic model (ROAM), turbomachinery, viscous-inviscid coupling
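A partitioned FSI coupling of the kind mentioned here alternates the two solvers until the interface state stops changing, usually with under-relaxation for stability. A minimal fixed-point sketch with stand-in linear solvers (all names and numbers are illustrative; the actual PM/BEM data exchange is far richer):

```python
import numpy as np

def partitioned_fsi(fluid_load, structural_response, u0,
                    omega=0.5, tol=1e-10, max_iter=500):
    """Fixed-point (Gauss-Seidel) partitioned coupling: the fluid solver
    maps interface displacement -> load, the structural solver maps
    load -> displacement; omega is the under-relaxation factor."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        f = fluid_load(u)                  # fluid solve at current shape
        u_new = structural_response(f)     # structural solve under that load
        if np.linalg.norm(u_new - u) < tol:
            return u_new                   # interface converged
        u = (1 - omega) * u + omega * u_new
    return u

# Toy 1-DOF model: aerodynamic stiffness k, structural stiffness K,
# static load p; the coupled equilibrium is u = p / (K + k) = 1.0 here.
k, K, p = 1.0, 3.0, 4.0
u_star = partitioned_fsi(lambda u: -k * u, lambda f: (f + p) / K,
                         np.array([0.0]))
```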
Procedia PDF Downloads 266
276 The Gender Criteria of Film Criticism: Creating the ‘Big’, Avoiding the Important
Authors: Eleni Karasavvidou
Abstract:
Social and anthropological research, in parallel to Gender Studies, has highlighted the relationship between social structures and symbolic forms as an important field of interaction and a record of 'social trends', since the study of representations can contribute to the understanding of the social functions and power relations they encompass. This 'mirage', however, has to do not only with the representations themselves but also with the ways they are received and with the film or critical narratives that are established as dominant or alternative. Cinema and the criticism of its cultural products are no exception. Even in the rapidly changing media landscape of the 21st century, movies remain an integral and widespread part of popular culture, making films an extremely powerful means of 'legitimizing' or 'delegitimizing' visions of domination and commonsensical gender stereotypes throughout society. And yet it is film criticism, the 'language per se', that legitimizes, reinforces, rewards, and reproduces (or at least ignores) the stereotypical depictions of female roles that remain common in the realm of film images. This creates the need for the issue to emerge in academic research as well, questioning the gender criteria of film reviews as part of the effort toward an inclusive art and society. Qualitative content analysis is used to examine female roles in selected Oscar-nominated films against their reviews on leading websites and in newspapers. This method was chosen because of the complex nature of the depictions in the films and the narratives they evoke. The films were divided into basic scenes depicting social functions, such as love and work relationships and positions of power and their function, which were analyzed by content analysis, with borrowings from structuralism (Genette) and the local/universal images of intercultural philology (Wierlacher). 
In addition to measuring overall ‘representation time’ by gender, other qualitative characteristics were analyzed, such as speaking time, key sayings or actions, and the overall quality of the character's action in relation to the development of the scenario and to social representations in general, as well as quantitative characteristics (the insufficient number of female lead roles, fewer key supporting roles, and relatively few female directors and people in the production chain) and how these might affect screen representations. The quantitative analysis in this study was used to complement the qualitative content analysis. The focus then shifted to the criteria of film criticism and to the rhetorical narratives that exclude or highlight in relation to gender identities and functions. In the criteria and language of film criticism, stereotypes are often reproduced, or allegedly overturned, within the framework of apolitical "identity politics," which mainly addresses the surface of a self-referential cultural-consumer product without connecting it more deeply with material and cultural life. One prime example of this failure is the Bechdel Test, which tracks whether female characters speak in a film regardless of whether women's stories are actually represented in the films analyzed. If supposedly unbiased male filmmakers still fail to tell truly feminist stories, the same is the case with the criteria of criticism and the related interventions.
Keywords: representations, content analysis, reviews, sexist stereotypes
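The Bechdel-style screening mentioned above can be sketched programmatically. The scene schema below (speaker gender, addressee gender, topic) is a hypothetical illustration for this abstract, not the study's actual coding scheme:

```python
# Hypothetical sketch of a Bechdel-style check; the scene fields are
# assumptions, not the authors' coding categories.

def passes_bechdel(scenes):
    """A film passes if at least one scene has two women
    talking to each other about something other than a man."""
    for scene in scenes:
        if (scene["speaker_gender"] == "F"
                and scene["addressee_gender"] == "F"
                and scene["topic"] != "a man"):
            return True
    return False

film = [
    {"speaker_gender": "F", "addressee_gender": "M", "topic": "work"},
    {"speaker_gender": "F", "addressee_gender": "F", "topic": "a man"},
    {"speaker_gender": "F", "addressee_gender": "F", "topic": "work"},
]
print(passes_bechdel(film))  # True: the last scene satisfies all criteria
```

In practice such a check would sit alongside the richer qualitative coding described above, precisely because passing it says nothing about whether women's stories are actually told.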
275 Isolation and Probiotic Characterization of Lactobacillus plantarum and Lactococcus lactis from Gut Microbiome of Rohu (Labeo rohita)
Authors: Prem Kumar, Anuj Tyagi, Harsh Panwar, Vaneet Inder Kaur
Abstract:
Though aquaculture started as a livelihood occupation for poor and marginal farmers, it has now taken the shape of one of the biggest industries growing live protein in the form of aquatic organisms. Industrialization of the aquaculture sector has led to intensification, resulting in stress on aquatic organisms and frequent disease outbreaks with huge economic impacts. Indiscriminate use of antibiotics as growth promoters and prophylactic agents in aquaculture has resulted in the rapid emergence and spread of antibiotic resistance in bacterial pathogens. Over the past few years, the use of probiotics (as an alternative to antibiotics) in aquaculture has gained attention due to their immunostimulant and growth-promoting properties. It is now well known that, after administration, a probiotic bacterium has to compete against and establish itself within the native microbiota to exert its beneficial effects. Due to their non-fish origin, commercial probiotics may sometimes display poor probiotic functionality and antagonistic effects. Thus, isolation and characterization of probiotic bacteria from the same fish host is very much necessary. In this study, attempts were made to isolate potent probiotic lactic acid bacteria (LAB) from the intestinal microflora of rohu fish. Twenty-five experimental rohu fishes (mean weight 400 ± 20 g, mean standard length 20 ± 3 cm) were used in the study, and fish gut was collected after dissection under sterile conditions. A total of 150 tentative LAB isolates from selective agar media (de Man-Rogosa-Sharpe (MRS)) were screened for their antimicrobial activity against Aeromonas hydrophila and Micrococcus luteus. A total of 17 isolates, identified as Lactobacillus plantarum and Lactococcus lactis by biochemical tests and by PCR amplification and sequencing of the 16S rRNA gene fragment, displayed promising antimicrobial activity against both pathogens. Two isolates from each species (FLB1, FLB2 from L. plantarum; and FLC1, FLC2 from L.
lactis) were subjected to downstream characterization of probiotic potential. These isolates were compared in vitro for hemolytic activity, acid and bile tolerance, growth kinetics, auto-aggregation, cell-surface hydrophobicity against xylene and chloroform, tolerance to phenol, cell adhesion, and safety parameters (by intraperitoneal and intramuscular injection). None of the tested isolates showed any hemolytic activity, indicating their potential safety. Moreover, these isolates tolerated 0.3% bile (75-82% survival) and phenol stress (96-99% survival), with 100% viability at pH 3 over a period of 3 h. Antibiotic sensitivity testing revealed that all the tested LAB isolates were resistant to vancomycin, gentamicin, and streptomycin, and sensitive to erythromycin, chloramphenicol, ampicillin, trimethoprim, and nitrofurantoin. Tetracycline resistance was found in L. plantarum (isolates FLB1 and FLB2), whereas L. lactis was susceptible to it. Intramuscular and intraperitoneal challenge of rohu fingerlings (5 ± 1 g) with FLB1 showed no pathogenicity or disease symptoms over an observation period of 7 days. These results reveal FLB1 as a potential probiotic candidate for aquaculture application among the tested isolates.
Keywords: aquaculture, Lactobacillus plantarum, Lactococcus lactis, probiotics
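As an illustration of how tolerance figures such as the 75-82% bile survival above are conventionally derived from viable plate counts, a minimal sketch follows; the CFU values are hypothetical, not the study's data:

```python
def survival_pct(cfu_after, cfu_before):
    """Percent survival from viable plate counts (CFU/mL)."""
    return 100.0 * cfu_after / cfu_before

# Hypothetical counts for an isolate exposed to 0.3% bile for 3 h
before = 2.0e8   # CFU/mL at 0 h
after = 1.6e8    # CFU/mL at 3 h
print(f"{survival_pct(after, before):.0f}% survival")  # prints "80% survival"
```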
274 The Effectiveness of Prenatal Breastfeeding Education on Breastfeeding Uptake Postpartum: A Systematic Review
Authors: Jennifer Kehinde, Claire O’Donnell, Annmarie Grealish
Abstract:
Introduction: Breastfeeding has been shown to provide numerous health benefits for both infants and mothers. The decision to breastfeed is influenced by physiological, psychological, and emotional factors. However, the importance of equipping mothers with the necessary knowledge for successful breastfeeding practice cannot be overstated. The decline in global breastfeeding rates can be linked to a lack of adequate breastfeeding education during the prenatal stage. This systematic review examined the effectiveness of prenatal breastfeeding education on breastfeeding uptake postpartum. Method: This review was undertaken and reported in conformity with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and was registered on the international prospective register of systematic reviews (PROSPERO: CRD42020213853). A PICO analysis (population, intervention, comparison, outcome) was undertaken to inform the choice of keywords in the search strategy and to formulate the review question, which aimed to determine the effectiveness of prenatal breastfeeding educational programs in improving breastfeeding uptake following birth. A systematic search of five databases (Cumulative Index to Nursing and Allied Health Literature, Medline, PsycINFO, and Applied Social Sciences Index and Abstracts) was conducted for the period January 2014 to July 2021 to identify eligible studies. Quality assessment and narrative synthesis were subsequently undertaken. Results: Fourteen studies were included.
All 14 studies used different types of breastfeeding programs. Eight used a combination of curriculum-based breastfeeding education programs, group prenatal breastfeeding counselling, and one-to-one breastfeeding educational programs, all delivered in person. Four studies used web-based learning platforms to deliver breastfeeding education prenatally, delivered online and face to face over periods of 3 weeks to 2 months, with follow-up periods ranging from 3 weeks to 6 months. One study delivered the breastfeeding educational intervention through mother-to-mother breastfeeding support groups promoting exclusive breastfeeding, and one study disseminated breastfeeding education to participants based on the theory of planned behaviour. The most effective interventions were those that included both theory and hands-on demonstrations. Results showed increases in breastfeeding uptake, breastfeeding knowledge, positive attitudes toward breastfeeding, and maternal breastfeeding self-efficacy among mothers who participated in breastfeeding educational programs during prenatal care. Conclusion: Prenatal breastfeeding education increases women’s knowledge of breastfeeding. Mothers who are knowledgeable about breastfeeding and hold a positive attitude towards it tend to initiate breastfeeding and continue for a lengthened period. Findings demonstrate a general correlation between prenatal breastfeeding education and increased breastfeeding uptake postpartum. The high level of positive breastfeeding outcomes across the studies can be attributed to prenatal breastfeeding education.
This review provides rigorous contemporary evidence that healthcare professionals and policymakers can apply when developing effective strategies to improve breastfeeding rates and ultimately improve the health outcomes of mothers and infants.
Keywords: breastfeeding, breastfeeding programs, breastfeeding self-efficacy, prenatal breastfeeding education
273 Where do Pregnant Women Miss Out on Nutrition? Analysis of Survey Data from 22 Countries
Authors: Alexis D'Agostino, Celeste Sununtunasuk, Jack Fiedler
Abstract:
Background: Iron-folic acid (IFA) supplementation during antenatal care (ANC) has existed in many countries for decades. Despite this, low national coverage persists, and women often do not consume appropriate amounts during pregnancy. USAID’s SPRING Project investigated pregnant women’s access to, and consumption of, IFA tablets through ANC. Cross-country analysis provided a global picture of the state of IFA supplementation, while country-specific results noted key contextual issues, including geography, wealth, and ANC attendance. The analysis can help countries prioritize strategies for systematic performance improvements within one of the most common micronutrient supplementation programs aimed at reducing maternal anemia. Methodology: Using falter point analysis on Demographic and Health Survey (DHS) data collected from 162,958 women across 22 countries, SPRING identified four sequential falter points (ANC attendance, IFA receipt or purchase, IFA consumption, and number of tablets taken) at which pregnant women fell out of the IFA distribution structure. SPRING analyzed data on IFA intake from DHS surveys of women of reproductive age, disaggregated by ANC participation during the most recent pregnancy, residency, and women’s socio-economic status. Results: Average sufficient IFA tablet use across all countries was only eight percent. Even in the best performing countries, only about one-third of pregnant women consumed 180 or more IFA tablets during their most recent pregnancy. ANC attendance was an important falter point for a quarter of women across all countries (with the highest falter rates in the Democratic Republic of the Congo, Nigeria, and Niger).
Further analysis reveals patterns: some countries have high ANC coverage but low IFA provision during ANC (DRC and Haiti); others have high ANC coverage and IFA provision but few women taking any tablets (Nigeria and Liberia); and some perform well in ANC, supplies, and initial consumption, but very few women consume the recommended 180 tablets (Malawi and Cambodia). Country-level analysis identifies further patterns of supplementation. In Indonesia, for example, only 62% of women in the poorest quintile took even one IFA tablet, while 86% of the wealthiest women did. This association between socioeconomic status and IFA intake held across nearly all countries where these data are available and was also visible in rural/urban comparisons. Analysis of ANC attendance data also suggests that higher numbers of ANC visits are associated with higher tablet intake. Conclusions: While it is difficult to disentangle which specific aspects of supply or demand cause the low rates of consumption, this tool allows policy-makers to identify major bottlenecks to scaling up IFA supplementation during ANC. In turn, each falter point provides possible explanations of program performance and helps strategically identify areas for improved IFA supplementation. For example, improving the delivery of IFA supplementation in Ethiopia relies on increasing access to ANC, but also on identifying and addressing program gaps in IFA supply management and health workers’ practices in order to provide quality ANC services. While every country requires a customized approach to improving IFA supplementation, the multi-country analysis conducted by SPRING is a helpful first step in identifying country bottlenecks and prioritizing interventions.
Keywords: iron and folic acid, supplementation, antenatal care, micronutrient
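The four-step falter-point tabulation described above can be sketched as a sequential filter over survey records. The records and field names below are hypothetical stand-ins, not DHS data; only the 180-tablet cutoff comes from the text:

```python
# Hypothetical survey records; each falter point keeps only the women
# who cleared the previous step.
women = [
    {"anc": True,  "got_ifa": True,  "took_any": True,  "tablets": 180},
    {"anc": True,  "got_ifa": True,  "took_any": True,  "tablets": 60},
    {"anc": True,  "got_ifa": False, "took_any": False, "tablets": 0},
    {"anc": False, "got_ifa": False, "took_any": False, "tablets": 0},
]

steps = [
    ("attended ANC",        lambda w: w["anc"]),
    ("received IFA",        lambda w: w["got_ifa"]),
    ("took any tablet",     lambda w: w["took_any"]),
    ("took >= 180 tablets", lambda w: w["tablets"] >= 180),
]

remaining = women
for label, keep in steps:
    remaining = [w for w in remaining if keep(w)]
    print(f"{label}: {len(remaining)}/{len(women)}")
```

Comparing the drop-off at each step across countries is what surfaces the patterns noted above (e.g. high ANC attendance but low IFA provision).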
272 Ultrasound Disintegration as a Potential Method for the Pre-Treatment of Virginia Fanpetals (Sida hermaphrodita) Biomass before Methane Fermentation Process
Authors: Marcin Dębowski, Marcin Zieliński, Mirosław Krzemieniewski
Abstract:
As methane fermentation is a complex series of successive biochemical transformations, its subsequent stages are determined, to varying extents, by physical and chemical factors. A specific state of equilibrium becomes established in a functioning fermentation system between environmental conditions, the rate of biochemical reactions, and the products of successive transformations. Among the physical factors that influence the effectiveness of methane fermentation, key significance is ascribed to temperature and the intensity of biomass agitation. Among the chemical factors, significant are the pH value, the type and availability of the culture medium (to put it simply, the C/N ratio), and the presence of toxic substances. One important element influencing the effectiveness of methane fermentation is the pre-treatment of organic substrates and the mode in which the organic matter is made available to anaerobes. Of all known and described methods for organic substrate pre-treatment before the methane fermentation process, ultrasound disintegration is one of the most interesting technologies. Interest in the ultrasound field, and in installations operating within existing systems, stems principally from the very wide and universal technological possibilities offered by the sonication process. This physical factor can induce deep physicochemical changes in ultrasonicated substrates that are highly beneficial from the viewpoint of methane fermentation. Here, a special role is ascribed to the disintegration of biomass that is subsequently subjected to methane fermentation. Once cell walls are damaged, cytoplasm and cellular enzymes are released. The released substances – in either dissolved or colloidal form – are immediately available to anaerobic bacteria for biodegradation.
To ensure the maximal release of organic matter from dead biomass cells, disintegration processes aim to achieve particle sizes below 50 μm. It has been demonstrated in many research works, and in systems operating at technical scale, that immediately after substrate ultrasonication the content of organic matter (characterized by COD, BOD5, and TOC indices) increases in the dissolved phase of the sedimentation water. This phenomenon points to the immediate sonolysis of solid substances contained in the biomass and to the release of cell material, and consequently to the intensification of the hydrolytic phase of fermentation. It results in a significant reduction of fermentation time and increased effectiveness of production of the gaseous metabolites of anaerobic bacteria. Because ultrasound disintegration of Virginia fanpetals biomass to intensify its conversion is a novel technique, it is often underestimated by operators of agricultural biogas plants. It has, however, many advantages that have a direct impact on its technological and economic superiority over the biomass conversion methods applied thus far. As of now, ultrasound disintegrators for biomass conversion are not mass-produced but are built by specialized groups in scientific or R&D centers. Therefore, their quality and effectiveness are to a large extent determined by their manufacturers’ knowledge and skills in the fields of acoustics and electronic engineering.
Keywords: ultrasound disintegration, biomass, methane fermentation, biogas, Virginia fanpetals
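The COD-based release of organic matter described above is often quantified in the sonication literature as a degree of disintegration relative to complete chemical (NaOH) disintegration. This formula is from that broader literature, not from the abstract itself, and the SCOD values below are hypothetical:

```python
def disintegration_degree(scod_us, scod_0, scod_naoh):
    """Degree of disintegration (%), a measure common in the sonication
    literature (an assumption here, not the authors' stated metric):
    DD = (SCOD_us - SCOD_0) / (SCOD_NaOH - SCOD_0) * 100,
    where SCOD is soluble COD after ultrasound, untreated, and after
    complete alkaline disintegration, respectively."""
    return 100.0 * (scod_us - scod_0) / (scod_naoh - scod_0)

# Hypothetical SCOD values in mg O2/L
print(f"DD = {disintegration_degree(2400.0, 600.0, 6600.0):.0f}%")  # DD = 30%
```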
271 PolyScan: Comprehending Human Polymicrobial Infections for Vector-Borne Disease Diagnostic Purposes
Authors: Kunal Garg, Louise Theusen Hermansan, Kanoktip Puttaraska, Oliver Hendricks, Heidi Pirttinen, Leona Gilbert
Abstract:
The Germ Theory (one infectious determinant equals one disease) has unarguably advanced our capability to diagnose and treat infectious diseases over the years. Nevertheless, the advent of technology, climate change, and volatile human behavior has brought about drastic changes in our environment, leading us to question the relevance of the Germ Theory today: will vector-borne disease (VBD) sufferers produce multiple immune responses when tested for multiple microbes? VBD patients producing multiple immune responses to different microbes would evidently suggest human polymicrobial infections (HPI). Current diagnostic tools have not kept pace with the research findings that would aid in diagnosing patients for polymicrobial infections. This shortcoming has caused misdiagnosis at very high rates, consequently diminishing patients’ quality of life due to inadequate treatment. Equipped with state-of-the-art scientific knowledge, PolyScan intends to address the pitfalls in current VBD diagnostics. PolyScan is a multiplex and multifunctional enzyme-linked immunosorbent assay (ELISA) platform that can test for numerous VBD microbes and allows simultaneous screening for multiple types of antibodies. To validate PolyScan, Lyme borreliosis (LB) and spondyloarthritis (SpA) patient groups (n = 54 each) were tested for Borrelia burgdorferi, Borrelia burgdorferi round body (RB), Borrelia afzelii, Borrelia garinii, and Ehrlichia chaffeensis against IgM and IgG antibodies. LB serum samples were obtained from Germany and SpA serum samples were obtained from Denmark under relevant ethical approvals. The SpA group represented the chronic LB stage because reactive arthritis (an SpA subtype) in the form of Lyme arthritis links to LB. It was hypothesized that patients from both groups would produce multiple immune responses, which as a consequence would evidently suggest HPI.
It was also hypothesized that the proportion of multiple immune responses in the SpA patient group would be significantly larger than in the LB patient group across both antibodies. It was observed that 26% of LB patients and 57% of SpA patients produced multiple immune responses, in contrast to the 33% of LB patients and 30% of SpA patients that produced solitary immune responses, when tested against IgM. Similarly, 52% of LB patients and an astounding 73% of SpA patients produced multiple immune responses, in contrast to the 30% of LB patients and 8% of SpA patients that produced solitary immune responses, when tested against IgG. Interestingly, IgM immune dysfunction was also recorded in both patient groups. Atypically, 6% of the 18% of LB patients who were unresponsive with the IgG antibody were recorded producing multiple immune responses with the IgM antibody. Similarly, 12% of the 19% of SpA patients who were unresponsive with the IgG antibody were recorded producing multiple immune responses with the IgM antibody. Thus, the results not only supported the hypothesis but also suggested that IgM may atypically prevail longer than IgG. The PolyScan concept will aid clinicians in detecting early, persistent, late, polymicrobial, and immune-dysfunction conditions linked to different VBDs. PolyScan provides a paradigm shift for the VBD diagnostic industry that will drastically shorten patients’ time to receive adequate treatment.
Keywords: diagnostics, immune dysfunction, polymicrobial, TICK-TAG
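The multiple- versus solitary-response proportions above reduce to counting, per patient, how many antigens tested positive. A minimal sketch, with a hypothetical positivity matrix rather than the study's data:

```python
# Each patient is represented by the list of antigens they tested
# positive for (abbreviations hypothetical).

def classify(results):
    """Percent of patients with multiple, solitary, or no positive calls."""
    n = len(results)
    multiple = sum(1 for r in results if len(r) >= 2)
    solitary = sum(1 for r in results if len(r) == 1)
    negative = sum(1 for r in results if len(r) == 0)
    return {"multiple": 100 * multiple / n,
            "solitary": 100 * solitary / n,
            "negative": 100 * negative / n}

igm_calls = [["Bb", "RB"], ["Ba"], [], ["Bg", "Ec", "Bb"]]
print(classify(igm_calls))  # {'multiple': 50.0, 'solitary': 25.0, 'negative': 25.0}
```

Comparing such per-antibody tallies between groups (and cross-tabulating IgM against IgG negatives) is what yields the dysfunction figures reported above.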
270 Modeling of Hot Casting Technology of Beryllium Oxide Ceramics with Ultrasonic Activation
Authors: Zamira Sattinova, Tassybek Bekenov
Abstract:
The article is devoted to modeling the technology of hot casting of beryllium oxide ceramics. The stages of ultrasonic activation of beryllium oxide slurry in the plant vessel to improve its rheological properties, and of hot casting in the moulding cavity with cooling and solidification of the casting, are described. The thermoplastic slurry (hereinafter referred to as slurry) exhibits the rheology of a non-Newtonian fluid with a yield stress and plastic viscosity. Cooling and solidification of the slurry in the forming cavity proceed through the liquid state, crystallization, and the solid state. This work presents a method for calculating the hot casting of the slurry based on the effective molecular viscosity of a viscoplastic fluid. It is shown that the slurry near the cooled wall is in a state of crystallization and plasticity, while the rest may still be in the liquid phase. A nonuniform distribution of temperature, density, and concentration of kinetically free binder takes place along the cavity section. This leads to compensation of shrinkage by the influx of slurry from the liquid zone into the crystallization and plasticity zones of the casting. In the plasticity zone, the shrinkage, determined by the concentration of kinetically free binder, is compensated under the action of the pressure gradient. The solidification mechanism, the mechanical behavior of the casting mass during casting, and the rheological and thermophysical properties of the thermoplastic BeO slurry under ultrasound exposure have not been well studied. Nevertheless, experimental data allow us to conclude that the effect of ultrasonic vibrations on the slurry mass leads to a change in structure, an improvement in technological properties, a decrease in heterogeneity, and a change in rheological properties.
In the course of the experiments, the effect of ultrasonic treatment and its duration on the change in viscosity and ultimate shear stress of the slurry was studied as a function of temperature (55-75℃) and the mass fraction of the binder (10-11.7%). Changes in these properties before and after ultrasound exposure were analyzed, as well as the nature of the flow in the system under study. Operating experience with the ultrasound unit has shown that the casting capacity of the slurry increases by an average of 15%, while the viscosity decreases by more than half. Experimental study of the physicochemical properties and phase change, with simultaneous consideration of all factors affecting product quality in continuous casting, is labor-intensive. Therefore, an effective way to control the physical processes occurring in the formation of articles with predetermined properties and shapes is to simulate the process and determine its basic characteristics. The results of the calculations cover the whole hot casting sequence for beryllium oxide slurry, taking into account the change in its state of aggregation. Ultrasonic treatment improves the rheological properties and increases the fluidity of the slurry in the forming cavity. The calculations show the influence of velocity, temperature, and the structural parameters of the cavity on the cooling-solidification process of the casting. The calculations also identified molding conditions that accommodate shrinkage of the slurry during hot casting, making it possible to obtain a solidifying product with a uniform beryllium oxide structure at the outlet of the cavity.
Keywords: hot casting, thermoplastic slurry molding, shrinkage, beryllium oxide
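A fluid with a yield stress and plastic viscosity, as described above, corresponds to Bingham-type behavior, for which an effective (apparent) viscosity can be written as mu_eff = tau0 / gamma_dot + mu_p. This identification and all parameter values below are illustrative assumptions, not the authors' model or measurements:

```python
def effective_viscosity(tau0, mu_p, gamma_dot):
    """Effective viscosity of a Bingham-type slurry (Pa*s):
    mu_eff = tau0 / gamma_dot + mu_p, valid once the yield stress
    tau0 (Pa) is exceeded; mu_p is the plastic viscosity (Pa*s)."""
    return tau0 / gamma_dot + mu_p

# Hypothetical parameter sets before and after ultrasonic treatment,
# chosen to mimic the reported more-than-twofold viscosity drop.
for label, tau0, mu_p in [("before US", 40.0, 2.0), ("after US", 15.0, 0.9)]:
    mu = effective_viscosity(tau0, mu_p, gamma_dot=10.0)  # shear rate in 1/s
    print(f"{label}: mu_eff = {mu:.1f} Pa*s")
```

Because tau0 enters as tau0/gamma_dot, lowering the yield stress (as sonication reportedly does) reduces the effective viscosity most strongly at low shear rates, i.e. precisely where mold filling stalls.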
269 Measuring Digital Literacy in the Chilean Workforce
Authors: Carolina Busco, Daniela Osses
Abstract:
The development of digital literacy has become a fundamental element of citizen inclusion, access to quality jobs, and a labor market capable of responding to the digital economy. No methodological instruments are available in Chile to measure the workforce’s digital literacy and improve national policies on this matter. Thus, the objective of this research is to develop a survey to measure digital literacy in a sample of 200 Chilean workers. The dimensions considered in the instrument are sociodemographics, access to infrastructure, digital education, digital skills, and the ability to use e-government services. To develop a digital literacy model of indicators and a research instrument for this purpose, along with an exploratory analysis of data using factor analysis, we used an empirical, quantitative-qualitative, exploratory, non-probabilistic, and cross-sectional research design. The research instrument is a survey created to measure the variables that make up the conceptual map prepared from the bibliographic review. Before applying the survey, a pilot test was implemented, resulting in several adjustments to the phrasing of some items. A validation test was also applied with six experts, whose observations were incorporated into the final instrument. The survey contained 49 items divided into three sets of questions: i) sociodemographic data; ii) a four-value Likert scale ranked according to the level of agreement; and iii) multiple-choice questions complementing the dimensions. Data collection occurred between January and March 2022. For the factor analysis, we used the answers to the 12 Likert-scale items. The KMO statistic showed a value of 0.626, indicating a medium level of correlation; Bartlett’s test yielded a significance value of less than 0.05; and Cronbach’s alpha was 0.618.
Taking all factor selection criteria into account, we decided to include and analyze four factors that together explain 53.48% of the accumulated variance. We identified the following factors: i) access to infrastructure and opportunities to develop digital skills at the workplace or educational establishment (15.57%), ii) ability to solve everyday problems using digital tools (14.89%), iii) online tools used to stay connected with others (11.94%), and iv) residential Internet access and speed (11%). The quantitative results were discussed within six focus groups selected using heterogeneous criteria related to the most relevant variables identified in the statistical analysis: upper-class school students; middle-class university students; Ph.D. professors; low-income working women; elderly individuals; and a group of rural workers. The digital divide and its social and economic correlates are evident in the results of this research. In Chile, the items that explain the acquisition of digital tools focus on access to infrastructure, which ultimately acts as the first filter on the development of digital skills. Therefore, as expressed in the literature review, the advance of these skills differs radically when sociodemographic variables are considered. This increases socioeconomic distances and exclusion criteria, putting those who do not have these skills at a disadvantage and forcing them to seek the assistance of others.
Keywords: digital literacy, digital society, workforce digitalization, digital skills
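The factor-retention step described above (eigenvalues of the item correlation matrix and cumulative variance explained) can be sketched as follows. The Likert responses are simulated, since the study's data are not reproduced here, so the retained-factor count and variance shares will not match the reported 53.48%:

```python
# Sketch of factor selection via eigenvalues of the correlation matrix;
# simulated four-value Likert responses for 200 workers x 12 items.
import numpy as np

rng = np.random.default_rng(0)
items = rng.integers(1, 5, size=(200, 12)).astype(float)

corr = np.corrcoef(items, rowvar=False)          # 12 x 12 item correlations
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending eigenvalues

explained = 100 * eigvals / eigvals.sum()        # % variance per factor
cumulative = np.cumsum(explained)
n_kaiser = int((eigvals > 1.0).sum())            # Kaiser criterion: eigenvalue > 1
print(f"factors retained (Kaiser): {n_kaiser}")
print(f"variance explained by first 4 factors: {cumulative[3]:.1f}%")
```

In a full analysis the retained factors would then be rotated and interpreted against the item wording, as the four named factors were here.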
268 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand
Authors: Wen Liu, Errol Haarhoff, Lee Beattie
Abstract:
In 2010, New Zealand’s central government reorganised local government arrangements in Auckland by amalgamating the previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years, as expressed in the first ever spatial plan in the region – the Auckland Plan (2012). The Auckland Plan supports implementing a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention of making Auckland a globally competitive city and ‘the most liveable city in the world’. Turning that vision into reality is operationalised through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad range of literature on urban growth management, one significant issue stands out about intensification: the ‘gap’ between strategic planning and what has been achieved is evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualized largely relies on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is of profound influence, yet has received limited empirical analysis. In Auckland, the establishment of the Auckland Plan set up the strategies to deliver intensification across diverse arenas.
Nonetheless, planning policy itself does not necessarily achieve the envisaged objectives; delivering a planning system with the capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery is beholden to the Unitary Plan. However, questions have been asked as to whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, making it doubtful whether the main principles of the intensification strategies can be realized. This raises questions over whether the Auckland Plan’s policy goals, including delivering a ‘quality compact city’ and residential intensification, can be achieved in practice. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher density development. It explores the process of plan development and the plan making and implementation frameworks of the first ever spatial plan in Auckland, so as to explicate the objectives and processes involved, and considers whether this will facilitate decision making processes that realize the anticipated intensive urban development.
Keywords: urban intensification, sustainable development, plan making, governance and implementation
267 Strategies for Urban-Architectural Design for the Sustainable Recovery of the Huayla Estuary in Puerto Bolivar, Machala-Ecuador
Authors: Soledad Coronel Poma, Lorena Alvarado Rodriguez
Abstract:
The purpose of this project is to design public space through urban-architectural strategies that contribute to the sustainable recovery of the Huayla estuary and the revival of tourism in the area. The design draws on sustainable and architectural ideas used in similar cases, along with national and international regulations for saving shorelines in danger. To understand the situation of this location: Puerto Bolivar is the main port of the Province of El Oro and of the south of the country, through which 90,000 national and foreign tourists pass all year round. For that reason, a physical-urban, social, and environmental analysis of the area was carried out through surveys and conversations with the community. This analysis showed that around 70% of people feel unsatisfied with, and concerned about, the estuary and its surroundings. Crime, absence of green areas, poor conservation of the shoreline, lack of tourists, poor commercial infrastructure, and the spread of informal commerce are the main issues to be solved. As an intervention project whose main goal is that residents and tourists have contact with native nature and enjoy local activities, three main strategies are proposed to recover the estuary and its surroundings: mobility, ecology, and urban-architectural design. First, the design of this public space is based on turning the estuary location into a linear promenade conceived as a tourist corridor, which would help to reduce pollution, increase green spaces, and improve tourism. Another strategy aims to improve the economy of the community through local activities like fishing and sailing and the commerce of fresh seafood, both as raw products and in restaurants. Furthermore, in support of the environmental approach, some houses are rebuilt as sustainable houses using local materials and rearranged into blocks closer to the commercial area.
Finally, the planning incorporates plants such as palms, samán trees, and mangroves around the area to encourage people to get in touch with nature. The results of designing this space show an increase in the green area per inhabitant index, which rose from 1.69 m²/inhabitant to 10.48 m²/inhabitant, with 12,096 m² of green corridors and the incorporation of 5,000 m² of mangroves at the shoreline. Living zones also increased through the creation of green areas that take advantage of the existing nature and through the implementation of restaurants and recreational spaces. Moreover, the relocation of houses and buildings helped to free the estuary's shoreline, so people now live in more comfortable places closer to their workplaces. Finally, dock space is increased to match the capacity of the boats and canoes, helping to organize the area in the estuary. To sum up, this project seeks to improve the estuary environment, its shoreline, and its surroundings, including the vegetation, the infrastructure, and the people with their local activities, achieving a better quality of life, attracting tourism, reducing pollution, and finally arriving at a fully recovered estuary as a natural ecosystem.

Keywords: recover, public space, estuary, sustainable
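As a sanity check on the figures above: the abstract does not report the population served, but the jump in the index is internally consistent if one assumes roughly 1,945 inhabitants (a hypothetical figure chosen here so the reported numbers line up, not a value given in the study). A minimal sketch:

```python
# Green-area-per-inhabitant index, using the figures reported in the abstract.
# The population is NOT given there; 1945 is a hypothetical value chosen so
# that the reported before/after indices are mutually consistent.
baseline_index = 1.69    # m² of green area per inhabitant, before the project
corridors_m2 = 12096     # new green corridors
mangroves_m2 = 5000      # mangroves added at the shoreline
population = 1945        # assumed, not reported

baseline_green_m2 = baseline_index * population
new_index = (baseline_green_m2 + corridors_m2 + mangroves_m2) / population
print(round(new_index, 2))  # → 10.48, matching the reported figure
```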
Procedia PDF Downloads 147

266 The Influence of Human Movement on the Formation of Adaptive Architecture
Authors: Rania Raouf Sedky
Abstract:
Adaptive architecture refers to buildings specifically designed to adapt to their occupants and their environments. In designing a biologically adaptive system, observing how living creatures in nature constantly adapt to different external and internal stimuli can be a great inspiration. The issue is not just how to create a system that is capable of change, but also how to find the quality of that change and determine the incentive to adapt. The research examines the possibilities of transforming spaces using the human body as an active tool. It also aims to design and build an effective dynamic structural system that can be applied at an architectural scale, integrating these elements into a new adaptive system that allows us to conceive a new way to design, build, and experience architecture dynamically. The main objective was to address the possibility of a reciprocal transformation between the user and the architectural element, so that the architecture can adapt to the user as the user adapts to the architecture. The motivation is the desire to realize the psychological benefits of an environment that can respond to, and thus empathize with, human emotions through its ability to adapt to the user. Adaptive applications of kinematic structures have been discussed in architectural research for more than a decade, and this work has proven effective in developing kinematic structures that are responsive and adaptive, contributing to 'smart architecture'. A wide range of strategies has been used to build complex kinetic and robotic mechanisms that achieve convertibility and adaptability in engineering and architecture. One of the main contributions of this research is to explore how the physical environment can change its shape to accommodate different spatial configurations based on the movement of the user's body. The main focus is on the relationship between materials, shape, and interactive control systems.
The intention is to develop a scenario in which the user can move and the structure interacts without any physical contact. This soft, shape-shifting language and interaction control technology will provide new possibilities for enriching human-environment interactions. Can we imagine a space that understands its users through physical gestures and visual expressions and responds accordingly? Can we imagine a space whose interaction depends not only on preprogrammed operations but on real-time feedback from its users? The research also raises important questions for the future: what would be the appropriate structure to exhibit physical interaction with a dynamic world? This study concludes with a strong belief in the future of responsive kinetic structures. We envision that they will develop beyond current structures and radically change the way spaces are experienced. Such structures have obvious advantages in terms of energy performance and the ability to adapt to the needs of users. The research highlights the interface between remote sensing and a responsive environment to explore the possibility of an interactive architecture that adapts and responds to user movements.

Keywords: adaptive architecture, interactive architecture, responsive architecture, tensegrity
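The contact-free sense-and-respond loop described above can be illustrated with a toy sketch: a sensed user distance is mapped to a target panel opening, and the actuator position is smoothed so the structure moves gradually rather than abruptly. Every detail here (the distance thresholds, the smoothing factor, the function names) is a hypothetical illustration of the idea, not the system used in the study:

```python
def panel_opening(distance_m, near=0.5, far=3.0):
    """Map a sensed user distance (metres) to a target opening in [0, 1].
    The closer the user, the more the panel opens (an assumed behaviour)."""
    t = (far - distance_m) / (far - near)
    return max(0.0, min(1.0, t))  # clamp to the actuator's range

def smooth(prev, target, alpha=0.2):
    """Exponential smoothing so the structure responds gradually."""
    return prev + alpha * (target - prev)

# One pass of the sense -> map -> actuate loop on simulated readings:
position = 0.0
for reading in [2.8, 2.0, 1.2, 0.6]:  # user approaching the panel
    position = smooth(position, panel_opening(reading))
```

The smoothing step stands in for the real concern the abstract raises: a responsive structure must filter noisy, real-time user feedback rather than react to every raw sensor value.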
Procedia PDF Downloads 156

265 Towards a Measuring Tool to Encourage Knowledge Sharing in Emerging Knowledge Organizations: The Who, the What and the How
Authors: Rachel Barker
Abstract:
The exponential velocity of today's truly knowledge-intensive world increasingly bombards organizations with unfathomable challenges. Organizations are hence introduced to strange lexicons of descriptors belonging to a new paradigm of who, what, and how knowledge at the individual and organizational levels should be managed. Although organizational knowledge has been recognized as a valuable intangible resource that holds the key to competitive advantage, little progress has been made in understanding how knowledge sharing at the individual level could benefit knowledge use at the collective level to ensure added value. The research problem is that little research exists to measure knowledge sharing through a multi-layered structure of ideas founded on philosophical assumptions that support presuppositions and commitment, which requires actual findings from measured variables to confirm observed and expected events. The purpose of this paper is to address this problem by presenting a theoretical approach to measuring knowledge sharing in emerging knowledge organizations. The research question is why, despite the competitive necessity of becoming a knowledge-based organization, leaders have found it difficult to transform their organizations; the answer pursued here is a lack of knowledge on who should do it, what should be done, and how. The main premise of this research is the challenge for knowledge leaders to develop an organizational culture conducive to the sharing of knowledge, in which learning becomes the norm. The theoretical constructs were derived from the three components of knowledge management theory, namely the technical, communication, and human components, and it is suggested that this knowledge infrastructure could ensure effective management.
While it is acknowledged that implementing and measuring all relevant concepts might be somewhat problematic, this paper presents the effect of eight critical success factors (CSFs): organizational strategy, organizational culture, systems and infrastructure, intellectual capital, knowledge integration, organizational learning, motivation/performance measures, and innovation. These CSFs were identified through a comprehensive review of existing research and tested in a new framework adapted from the four perspectives of the balanced scorecard (BSC). Based on these CSFs and their items, an instrument was designed and tested among managers and employees of a purposefully selected engineering company in South Africa that relies on knowledge sharing to ensure its competitive advantage. Rigorous pretesting through personal interviews with executives and a number of academics took place to validate the instrument, improve the quality of the items, and correct wording issues. Through analysis of the collected surveys, this research empirically models and uncovers key aspects of these dimensions based on the CSFs. Reliability of the instrument was assessed with Cronbach's α for the two sections of the instrument, at the organizational and individual levels. Construct validity was confirmed using factor analysis. The impact of the results was tested using structural equation modelling and proved to be a basis for implementing and understanding the competitive predisposition of the organization as it enters the process of knowledge management. In addition, the organization realised the importance of consolidating its knowledge assets to create value that is sustainable over time.

Keywords: innovation, intellectual capital, knowledge sharing, performance measures
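For readers unfamiliar with the reliability statistic used above, Cronbach's α can be computed from the item variances and the variance of the summed scale: α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜ). A minimal sketch with illustrative data (not the study's survey responses):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item,
    aligned across respondents (same respondent order in each list)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Two perfectly consistent items yield the maximum reliability:
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # → 1.0
```

This uses the sample variance throughout; mixing sample and population variance in the numerator and denominator would bias the estimate.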
Procedia PDF Downloads 195