Search results for: ultrasound image
181 Disaggregating Communities and the Making of Factional States: Evidence from Joint Forest Management in Sundarban, India
Authors: Amrita Sen
Abstract:
In the face of a growing insurgent movement and the perceived failure of the state and the market to deliver sustainable resource management, a range of decentralized forest management policies was formulated over the last two decades, recognizing the need for community representation within the statutory methods of forest management. This recognition rested on the virtues of ecological sustainability and traditional environmental knowledge, of which forest dependent communities were considered the principal repositories. The present study, in the light of empirical insights, reflects on the contemporary disjunctions between the preconceived communitarian ethic in environmentalism and the lived reality of forest based life-worlds. Many of the popular as well as dominant ideologies that have historically shaped the conceptual and theoretical understanding of sociology need further scrutiny in the context of the emerging contours of empirical knowledge, which lend opportunities for substantive reworking and analysis. The image of the community appears to be one of those concepts: an identity which has long defined perspectives and processes associated with people living together harmoniously in small physical spaces. Through an ethnographic account of the implementation of Joint Forest Management (JFM) in a forest fringe village in Sundarban, the study explores the ways in which the idea of ‘community’ gets transformed through the process of state-making, necessitating its departure from the standard, conventional definition of homogeneity and internal equity. The study calls for attention to the anthropology of micro-politics, disaggregating an essentially constructivist anthropology of ‘collective identities’, which can make political mobilizations visible within the seemingly culturalist production of communities.
The two critical questions that the paper seeks to ask in this context are: how is the ‘local’ constituted within community based conservation practices? Within the efforts of collaborative forest management, how accurately does the depiction of ‘indigenous environmental knowledge’ subscribe to its role in sustainable conservation practices? Reflecting on the execution of JFM in Sundarban, the study critically explores the ways in which the state ceases to be ‘trans-national’ and interacts with rural life-worlds through its local factions. Simultaneously, the study attempts to articulate the scope of constructing a competing representation of community, shaped by increasing political negotiations and bureaucratic alignments, which strains against the usual preoccupations with tradition, primordiality and non-material culture as well as the amorphous construction of indigeneity.
Keywords: community, environmentalism, JFM, state-making, identities, indigenous
Procedia PDF Downloads 200
180 Facial Recognition of University Entrance Exam Candidates Using FaceMatch Software in Iran
Authors: Mahshid Arabi
Abstract:
In recent years, remarkable advancements in the fields of artificial intelligence and machine learning have led to the development of facial recognition technologies. These technologies are now employed in a wide range of applications, including security, surveillance, healthcare, and education. In the field of education, the identification of university entrance exam candidates has been one of the fundamental challenges. Traditional methods such as using ID cards and handwritten signatures are not only inefficient and prone to fraud but also susceptible to errors. In this context, utilizing advanced technologies like facial recognition can be an effective and efficient solution to increase the accuracy and reliability of identity verification in entrance exams. This article examines the use of FaceMatch software for recognizing the faces of university entrance exam candidates in Iran. The main objective of this research is to evaluate the efficiency and accuracy of FaceMatch software in identifying university entrance exam candidates to prevent fraud and ensure the authenticity of individuals' identities. Additionally, this research investigates the advantages and challenges of using this technology in Iran's educational systems. This research was conducted using an experimental method and random sampling. In this study, 1000 university entrance exam candidates in Iran were selected as samples. The facial images of these candidates were processed and analyzed using FaceMatch software. The software's accuracy and efficiency were evaluated using various metrics, including accuracy rate, error rate, and processing time. The research results indicated that FaceMatch software could accurately identify candidates with a precision of 98.5%. The software's error rate was less than 1.5%, demonstrating its high efficiency in facial recognition. 
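The reported accuracy and error-rate figures follow directly from counts of correct and incorrect identifications; a minimal sketch in Python (the helper name and the counts are illustrative, not the study's raw data):

```python
def recognition_metrics(correct, incorrect, times):
    """Accuracy rate, error rate, and mean processing time per image."""
    total = correct + incorrect
    accuracy = correct / total           # e.g. 985/1000 = 98.5%
    error = incorrect / total            # e.g. 15/1000 = 1.5%
    mean_time = sum(times) / len(times)  # seconds per candidate image
    return accuracy, error, mean_time

# Counts chosen to be consistent with the reported 98.5% accuracy over 1000 candidates:
acc, err, t = recognition_metrics(correct=985, incorrect=15, times=[1.8, 1.9, 1.7])
```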
Additionally, the average processing time for each candidate's image was less than 2 seconds, indicating the software's high efficiency. Statistical evaluation of the results using rigorous tests, including analysis of variance (ANOVA) and the t-test, showed that the observed differences were significant and that the software's accuracy in identity verification is high. The findings of this research suggest that FaceMatch software can be effectively used as a tool for identifying university entrance exam candidates in Iran. This technology not only enhances security and prevents fraud but also simplifies and streamlines the exam administration process. However, challenges such as preserving candidates' privacy and the costs of implementation must also be considered. The use of facial recognition technology with FaceMatch software in Iran's educational systems can be an effective solution for preventing fraud and ensuring the authenticity of university entrance exam candidates' identities. Given the promising results of this research, it is recommended that this technology be more widely implemented and utilized in the country's educational systems.
Keywords: facial recognition, FaceMatch software, Iran, university entrance exam
Procedia PDF Downloads 49
179 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China
Abstract:
With the rapid development of urbanization in China, environmental protection is under severe pressure, so analyzing and optimizing the landscape pattern is an important measure to ease the strain on the ecological environment. This paper takes Wuhan Urban Development Zone as the research object and studies its landscape pattern evolution and quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using Erdas software. Next, the landscape pattern indices at the landscape, class, and patch levels were studied based on Fragstats. Then five indicators of the ecological environment based on the National Environmental Protection Standard of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. Besides, the cost distance analysis of ArcGIS was applied to simulate wildlife migration, thus indirectly measuring the improvement of ecological environment quality. The results show that the area of construction land increased by 491%, while bare land, sparse grassland, forest, farmland, and water decreased by 82%, 47%, 36%, 25%, and 11% respectively; they were mainly converted into construction land. At the landscape level, the landscape indices all showed a downward trend: the number of patches (NP), landscape shape index (LSI), connection index (CONNECT), Shannon's diversity index (SHDI), and aggregation index (AI) decreased by 2778, 25.7, 0.042, 0.6, and 29.2% respectively, all of which indicated that the NP, the degree of aggregation, and the landscape connectivity declined. At the class level, for construction land and forest, CPLAND, TCA, AI, and LSI increased, but the distribution statistics of core area (CORE_AM) decreased. As for farmland, water, sparse grassland, and bare land, CPLAND, TCA, DIVISION, patch density (PD), and LSI decreased, yet patch fragmentation and CORE_AM increased.
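Several of the landscape-level metrics named above have simple closed forms; for instance, Shannon's diversity index (SHDI) is computed from class-area proportions. A small illustrative sketch (the function name and class areas are hypothetical, not the Wuhan data):

```python
import math

def shannon_diversity(class_areas):
    """SHDI = -sum(p_i * ln(p_i)), where p_i is the area proportion of class i."""
    total = sum(class_areas)
    return -sum((a / total) * math.log(a / total) for a in class_areas if a > 0)

# Hypothetical class areas (ha): construction, water, forest, farmland
shdi = shannon_diversity([400, 250, 200, 150])
```

A landscape dominated by a single class drives SHDI toward 0, which is why large-scale conversion to construction land lowers the index.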
At the patch level, the patch area, patch perimeter, and shape index of water, farmland, and bare land continued to decline. The three indices of forest patches increased overall, those of sparse grassland decreased as a whole, and those of construction land increased. It is obvious that urbanization greatly influenced the landscape evolution. Ecological diversity and landscape heterogeneity of ecological patches clearly dropped, and the Habitat Quality Index continuously declined by 14%. Therefore, an optimization strategy based on greenway network planning is raised for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to research on the spatial layout of urbanization.
Keywords: landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture
Procedia PDF Downloads 168
178 A Qualitative Exploration of the Beliefs and Experiences of HIV-Related Self-Stigma Amongst Young Adults Living with HIV in Zimbabwe
Authors: Camille Rich, Nadine Ferris France, Ann Nolan, Webster Mavhu, Vongai Munatsi
Abstract:
Background and Aim: Zimbabwe has one of the highest HIV rates in the world, with a 12.7% adult prevalence rate. Young adults are a key group affected by HIV, and one-third of all new infections in Zimbabwe are amongst people ages 18-24 years. Stigma remains one of the main barriers to managing and reducing the HIV crisis, especially for young adults. There are several types of stigma, including enacted stigma, the outward discrimination towards someone, and self-stigma, the negative self-judgments one holds towards oneself. Self-stigma can have severe consequences, including feelings of worthlessness, shame, suicidal thoughts, and avoidance of medical help, which can have detrimental effects on those living with HIV. However, the unique beliefs and impacts of self-stigma amongst key groups living with HIV have not yet been explored. Therefore, the focus of this study is on the beliefs and experiences of HIV-related self-stigma as experienced by young adults living in Harare, Zimbabwe. Research Methods: A qualitative approach was taken for this study, using sixteen semi-structured interviews with young adults (18-24 years) living with HIV in Harare. Participants were conveniently and purposively sampled as members of Africa, an organization dedicated to young people living with HIV. Interviews were conducted over Zoom due to the COVID-19 pandemic, recorded, and then coded using the software NVivo. The data were analyzed using both inductive and deductive thematic analysis to find common themes. Results: All of the participants experienced HIV-related self-stigma, and both beliefs and experiences were explored. These negative self-perceptions included beliefs of worthlessness, hopelessness, and negative body image. The young adults described believing they were not good enough to be around HIV-negative people or that they could never be loved due to their HIV status.
Developing self-stigmatizing thoughts came from internalizing negative cultural values, stereotypes about people living with HIV, and adverse experiences. Three main themes of self-stigmatizing experiences emerged: disclosure difficulties, relationship complications, and being isolated. Fear of telling someone their status, rejection in a relationship, and being excluded by others due to their HIV status contributed to their self-stigma. These experiences caused feelings of loneliness, sadness, shame, fear, and low self-worth. Conclusions: This study explored the beliefs and experiences of HIV-related self-stigma of these young adults. The emergence of negative self-perceptions demonstrated deep-rooted beliefs of HIV-related self-stigma that adversely impact the participants. The negative self-perceptions and self-stigmatizing experiences caused the participants to feel worthless, hopeless, ashamed, and alone, negatively impacting their physical and mental health, personal relationships, and sense of self-identity. These results can now be used to pursue interventions that target the specific beliefs and experiences of young adults living with HIV and reduce the adverse consequences of self-stigma.
Keywords: beliefs, HIV, self-stigma, stigma, Zimbabwe
Procedia PDF Downloads 116
177 Learning to Translate by Learning to Communicate to an Entailment Classifier
Authors: Szymon Rutkowski, Tomasz Korbak
Abstract:
We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language’s 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and the premise, which are then passed to the classifier. The translator is rewarded for the classifier’s performance in determining entailment between sentences translated by the translator into the classifier’s native language. The translator’s performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, there are a number of improvements we introduce. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality.
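The communication-game training loop can be illustrated with a toy policy-gradient (REINFORCE) sketch: a one-parameter 'translator' policy is rewarded only when a stand-in 'classifier' accepts its output. This is a minimal illustration of the update rule under invented names, not the paper's actual model:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reinforce(steps=2000, lr=0.5, seed=0):
    """Toy REINFORCE loop: a Bernoulli 'translator' policy is rewarded by a
    stand-in 'classifier' only when it emits the correct token (action 1).
    For a Bernoulli-sigmoid policy, grad log pi(a) w.r.t. theta is (a - p)."""
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(steps):
        p = sigmoid(theta)
        a = 1 if rng.random() < p else 0  # sample an action from the policy
        reward = 1.0 if a == 1 else 0.0   # classifier 'approves' only token 1
        theta += lr * reward * (a - p)    # REINFORCE: lr * R * grad log pi(a)
    return sigmoid(theta)                 # probability of the rewarded action
```

After training, the policy concentrates its probability mass on the action the classifier rewards, which is the same mechanism that drives the translator toward informative translations.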
It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferrable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as its analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) to solve the problem of the translator’s policy optimization and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning based zero-shot machine translation.
Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning
Procedia PDF Downloads 130
176 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning
Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih
Abstract:
Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported, earning valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and the disease types cured by these plants. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and the disease types cured by plants before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process, and only a few studies have been conducted in the area to address these issues. One way to overcome these problems is to develop a deep learning model for efficient identification of aromatic medicinal plant parts with their corresponding disease types. The objective of the proposed study is to identify aromatic medicinal plant parts and classify their disease types using computer vision technology. Therefore, this research initiated a model for the classification of aromatic medicinal plant parts and their disease types by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants besides roots, flowers, fruits, and latex.
For this study, the researchers used RGB leaf images with a size of 128×128×3. They trained five cutting-edge models: a convolutional neural network, Inception V3, Residual Neural Network, MobileNet, and Visual Geometry Group (VGG). These models were chosen after a comprehensive review of the best-performing models. An 80/20 percentage split is used to evaluate the models, and classification metrics are used to compare them. The pre-trained Inception V3 model performed best, with training and validation accuracy of 99.8% and 98.7%, respectively.
Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network
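The 80/20 evaluation split mentioned above can be sketched as follows (the function name, file names, and labels are hypothetical placeholders, not the study's dataset):

```python
import random

def train_test_split(samples, test_fraction=0.2, seed=42):
    """Shuffle and split a dataset into train/test parts (80/20 by default)."""
    items = list(samples)
    random.Random(seed).shuffle(items)          # fixed seed for reproducibility
    cut = round(len(items) * (1 - test_fraction))
    return items[:cut], items[cut:]

# Hypothetical (image file, class label) pairs:
data = [(f"leaf_{i}.jpg", i % 5) for i in range(100)]
train, test = train_test_split(data)
print(len(train), len(test))  # 80 20
```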
Procedia PDF Downloads 192
175 Mobile Learning and Student Engagement in English Language Teaching: The Case of First-Year Undergraduate Students at Ecole Normale Superieure, Algeria
Authors: I. Tiahi
Abstract:
The aim of the current paper is to explore educational practices in contemporary Algeria. Research shows that such practices follow a traditional approach and overlook modern teaching methods such as mobile learning. Student engagement with respect to mobile learning was therefore examined through the following objectives: (1) to evaluate the current practice of English language teaching within Algerian higher education institutions, (2) to explore how social constructivism theory and m-learning help students’ engagement in the classroom, and (3) to explore the feasibility and acceptability of m-learning amongst institutional leaders. The methodology underpins a case study and action research. For the case study, the researcher engaged with 6 teachers, 4 institutional leaders, and 30 students, who took part in semi-structured interviews and classroom observations to explore the current teaching methods for English as a foreign language. For the action research, the researcher applied an intervention course to investigate the possibility and implications of future implementation of mobile learning in higher education institutions. The results were analyzed using thematic analysis. The research outcome showed that the disengagement of students in English language learning has many aspects. From the interviews with the teachers, the researcher found that they do not have enough resources, apart from PowerPoint slides used by some teachers. According to them, the teaching method they are using is mostly a communicative and competency-based approach. Teachers reported that students are disengaged because they face psychological barriers. In the classroom setting, students are conscious of social approval from their peers, and fear of negative reinforcement that would damage their image acts as a preventive mechanism, making them scared of committing mistakes. This was also reflected in the findings.
Many other arguments can be given for this claim; however, in the Algerian setting it is common practice for teachers not to provide the positive reinforcement that opens students up to learning. Thus, in order to overcome such a psychological barrier, proper measures can be taken. On a conclusive note, it is evident that teachers, students, and institutional leaders provided positive feedback on using mobile learning. It is not only motivating but also engaging in learning processes. Apps such as Kahoot, Padlet, and Slido were well received and thus can be taken further to examine their wider impact in the Algerian context. In the future, it will be important to implement m-learning effectively in higher education to transform the current traditional practices into modern, innovative, and active learning. Persuading stakeholders of this change may be challenging; however, its long-term benefits are evident from the current research.
Keywords: Algerian context, mobile learning, social constructivism, student engagement
Procedia PDF Downloads 143
174 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is to find a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems; one of these is lip reading. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. Therefore, a lip reading system is one of the supportive technologies for hearing impaired or elderly people, and it is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing impaired persons in Myanmar, showing how to pronounce words precisely by identifying the features of lip movement. The proposed research develops a lip reading system for Myanmar consonants: one syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah) ) and two syllable consonants ( က(Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe) ၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi) ).
The proposed system has three subsystems: the first is the lip localization system, which localizes the lips in the digital inputs; the next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and the final one is the classification system. In the proposed research, the Two Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with an Active Contour Model (ACM) will be used for lip movement feature extraction. A Support Vector Machine (SVM) classifier is used for finding the class parameters and class number in the training and testing sets. Then, experiments will be carried out on the recognition accuracy of Myanmar consonants using only the visual information on lip movements. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing impaired persons use it as a language learning application. It can also be useful for normal hearing persons in noisy environments or conditions where they need to find out what was said by other people without hearing their voices.
Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)
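The 2D-DCT used in the feature extraction stage can be sketched naively as follows; a real system would use an optimized library routine, and the block size here is illustrative:

```python
import math

def dct2(block):
    """Naive 2D DCT-II with orthonormal scaling, applied to a square image block.
    Low-frequency coefficients (top-left) are typically kept as features."""
    n = len(block)

    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = []
    for u in range(n):
        row = []
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            row.append(alpha(u) * alpha(v) * s)
        out.append(row)
    return out

# For a constant 4x4 block of 4.0, the DC coefficient is n * value = 16,
# and every other coefficient is ~0:
coeffs = dct2([[4.0] * 4 for _ in range(4)])
```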
Procedia PDF Downloads 286
173 Kitchen Bureaucracy: The Preparation of Banquets for Medieval Japanese Royalty
Authors: Emily Warren
Abstract:
Despite the growing body of research on Japanese food history, little has been written about the attitudes and perspectives premodern Japanese people held about their food, even on special celebratory days. In fact, the overall image that arises from the literature is one of ambivalence: that the medieval nobility of the Heian and Kamakura periods (794-1333) did not much care about what they ate, and for that reason food seems relatively scarce in certain historical records. This study challenges this perspective by analyzing the manuals written to guide palace management and feast preparation for royals, introducing two of the sources into English for the first time. This research is primarily based on three manuals that address different aspects of royal food culture and preparation. The Chûjiruiki, or Record of the Palace Kitchens (1295), is a fragmentary manual written by a bureaucrat in charge of the main palace kitchen office. This document collection details the utensils, furnishings, and courses that officials organized for the royals’ two daily meals: in the morning (asagarei gozen) and in the afternoon (hiru gozen), when they enjoyed seven courses, each one carefully cooked and plated. The orchestration of daily meals and frequent banquets would have been a complicated affair for those preparing the tableware and food, thus requiring texts like the Chûjiruiki, as well as another manual, the Nicchûgyôji (11th c.), or The Daily Functions. Because of the complex coordination between various kitchen-related bureaucratic offices, kitchen officials endeavored to standardize the menus and place settings depending on the time of year, religious abstinence days, and the available ingredients flowing into the capital as taxes. For the most important annual banquets and rites celebrating deities and the royal family, kitchen officials would likely refer to the Engi Shiki (927), or Protocols of the Engi Era, for details on offerings, servant payments, and menus.
This study proposes that many of the great feast events, and indeed even daily meals at the palace, were so standardized and carefully planned for repetition that there would have been little need for the contents of such feasts to be detailed in diaries or novels, places where historians have noted a lack of food descriptions. These descriptions were not omitted for lack of interest on the part of the nobility, but rather because knowledge of what would be served at banquets and feasts was considered a matter of course, in the same way that a modern American would likely not need to state the menu of a traditional Thanksgiving meal to an American audience. Where food was concerned, novelty more than tradition prompted a response in personal records like diaries.
Keywords: banquets, bureaucracy, Engi shiki, Japanese food
Procedia PDF Downloads 112
172 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture
Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko
Abstract:
Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic-lumbar spine from 15 healthy patients and 15 patients with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50×50 pixels were obtained. The set of features was derived from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet. The image analysis yielded 290 textural feature descriptors. The dimension of the feature space was reduced by the use of three selection methods: the Fisher coefficient (FC), mutual information (MI), and the joint minimization of classification error probability and average correlation coefficient between the chosen features (POE + ACC). Each of them returned the ten features occupying the initial places in the ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections returned the same features, arranged in a different order. In both rankings, the 50% percentile (Perc.50%) was found in first place; the next selected features come from the co-occurrence matrix. The sets of features selected in the selection process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT), and reduced error pruning tree (REPT).
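The Fisher coefficient used for feature selection can be sketched for a single feature in the two-class (healthy vs. pathological) setting; the function name and sample values below are illustrative, not the study's data:

```python
def fisher_score(healthy, pathological):
    """Fisher coefficient of one feature: squared difference of the class means
    divided by the sum of the class variances. Larger = better discriminator."""
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    m_h, m_p = mean(healthy), mean(pathological)
    return (m_h - m_p) ** 2 / (var(healthy) + var(pathological))

# Illustrative Perc.50% values for the two tissue groups:
score = fisher_score([0.51, 0.53, 0.52], [0.70, 0.72, 0.71])
```

Computing this score for each of the 290 descriptors and keeping the ten largest reproduces the shape of the FC ranking described above.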
In order to assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV), and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers using the set of features selected by the POE + ACC method. In the case of the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3%, and PPV = 93.3%. Additionally, the values of the other two parameters, i.e., TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. In the case of the logistic model trees classifier, the same accuracy was obtained (ACC = 90%), along with the highest values of TNR = 88.3% and NPV = 88.3%. The values of the other two parameters remained close to the highest: TPR = 91.7% and PPV = 91.6%. The results obtained in the experiment show that the use of classification trees is an effective method for classifying texture features, allowing the condition of spongy tissue to be identified for healthy cases and those with osteoporosis.
Keywords: classification, feature selection, texture analysis, tree algorithms
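The five quality parameters are all derived from one confusion matrix; a minimal sketch (the counts below are chosen to be consistent with the reported ACC = 90% and TPR ≈ 93.3% over 120 samples, not taken from the study):

```python
def tree_metrics(tp, tn, fp, fn):
    """Confusion-matrix quality measures used to compare the tree classifiers."""
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),  # overall classification accuracy
        "TPR": tp / (tp + fn),                   # sensitivity
        "TNR": tn / (tn + fp),                   # specificity
        "PPV": tp / (tp + fp),                   # positive predictive value
        "NPV": tn / (tn + fn),                   # negative predictive value
    }

# Hypothetical counts over the 120 samples (60 pathological, 60 healthy):
m = tree_metrics(tp=56, tn=52, fp=8, fn=4)
```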
Procedia PDF Downloads 182
171 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing
Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel
Abstract:
There is strong evidence that the climate has changed significantly, affecting various sectors including public health. The recommended feasible solution is adopting development trajectories that combine both mitigation and adaptation measures to improve resilience pathways. This approach demands consideration of the complex interactions between climate and social-ecological systems. While other sectors such as agriculture and water have developed climate resilience indices, the public health sector in Tanzania is still lagging behind. The aim of this study was to find out how Demographic and Health Surveys (DHS), linked with remote sensing (RS) technology and meteorological information, can be used as tools to inform climate change resilient development and evaluation for the health sector. A methodological review was conducted whereby a number of studies were content analyzed to find appropriate indicators and indices for household climate resilience and an approach for their integration. These indicators were critically reviewed, listed, filtered, and their sources determined. Preliminary identification and ranking of indicators were conducted using a participatory approach of pairwise weighting by national stakeholders selected from meetings/conferences on human health and climate change sciences in Tanzania. DHS datasets were retrieved from the MEASURE Evaluation project, processed, and critically analyzed for possible climate change indicators. Other sources of indicators of climate change exposure were also identified. For the purpose of preliminary reporting, the operationalization of the selected indicators was discussed to produce the methodological approach to be used in a comparative resilience analysis study. It was found that the household climate resilience index depends on the combination of three indices, namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS), and Household Exposure Status (HES).
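The abstract does not specify how the three sub-indices are aggregated; one plausible sketch, assuming equal weights and that sensitivity and exposure count against resilience while capacity counts for it (the function name, weights, and scores are all assumptions):

```python
def household_resilience_index(hc, hhs, hes, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Hypothetical aggregation of the three sub-indices into one score in [0, 1].
    hc  : Household Adaptive and Mitigation Capacity (higher = more resilient)
    hhs : Household Health Sensitivity (higher = less resilient)
    hes : Household Exposure Status (higher = less resilient)
    Equal weights are an assumption, not the study's calibration."""
    w_hc, w_hhs, w_hes = weights
    return w_hc * hc + w_hhs * (1.0 - hhs) + w_hes * (1.0 - hes)

# A high-capacity, low-sensitivity, low-exposure household scores high:
score = household_resilience_index(hc=0.8, hhs=0.3, hes=0.4)
```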
It was also found that DHS data alone cannot support resilience evaluation unless integrated with other data sources, notably flooding data as a measure of vulnerability, remote sensing images of the Normalized Difference Vegetation Index (NDVI), and meteorological data (deviation from rainfall patterns). It can be concluded that if these indices retrieved from DHS datasets are computed and scientifically integrated, they can produce a single climate resilience index, and resilience maps could be generated at different spatial and time scales to enhance targeted interventions for climate-resilient development and evaluation. However, further studies are needed to test the sensitivity of the index in resilience comparative analysis among selected regions.
Keywords: climate change, resilience, remote sensing, demographic and health surveys
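The combination of the three sub-indices into a single household score can be sketched as a simple composite. The additive form, the sign convention (capacity raises resilience; sensitivity and exposure lower it), and the equal default weights below are illustrative assumptions, not the study's actual formula:

```python
def household_resilience(hc, hhs, hes, weights=(1.0, 1.0, 1.0)):
    """Illustrative Household Climate Resilience Index.

    hc  : Household Adaptive and Mitigation Capacity, scaled to [0, 1]
    hhs : Household Health Sensitivity, scaled to [0, 1]
    hes : Household Exposure Status, scaled to [0, 1]

    The weights and the additive form are assumptions for illustration.
    """
    w_hc, w_hhs, w_hes = weights
    # capacity raises resilience; sensitivity and exposure reduce it
    return w_hc * hc - w_hhs * hhs - w_hes * hes
```

With weights derived from the stakeholders' pairwise ranking, a score of this kind could be computed per surveyed household and then mapped.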
Procedia PDF Downloads 166
170 Motivation of Doctors and its Impact on the Quality of Working Life
Authors: E. V. Fakhrutdinova, K. R. Maksimova, P. B. Chursin
Abstract:
At the present stage of societal progress, health care is an integral part of both the economic and the social system; in the latter case, medicine is a major component of a number of basic and necessary social programs. Since the foundation of the health system is its highly qualified health professionals, it is a logical proposition that increasing doctors' professionalism improves the effectiveness of the system as a whole. A doctor's professionalism is a collection of many components, in which an essential role is played by such personal-psychological factors as honesty, the willingness and desire to help people, and motivation. A number of researchers consider motivation an expression of basic human needs that have passed through the "filter" of the worldview and values learned by the individual in the process of socialization, prompting certain actions designed to achieve the expected result. From this point of view, several researchers propose the following classification of a highly skilled employee's needs: 1. the need for confirmation of competence (setting goals that match one's professionalism and receiving positive emotions from achieving them); 2. the need for independence (the ability to make one's own choices in contentious situations arising while carrying out specialist functions); 3. the need for belonging (in the case of health care workers, to the profession and, accordingly, to the high public status of the doctor). Nevertheless, it is important to understand that in a market economy a significant motivator for physicians (both legal entities and natural persons) is maximizing their own profit. In the case of health professionals, this duality of the motivational structure creates an additional contrast, since the public mind holds an image of the ideal physician: usually an altruistically minded person who thinks not primarily about personal benefit but about assisting others.
In this context, the question of the real motivation of health workers deserves special attention. A survey conducted by the American researcher Harrison Terni for the magazine "Med Tech" in 2010 polled more than 200 medical students starting their courses: the primary motivation in choosing the profession was the "desire to help people", and only 15% said that they wanted to become a doctor "to earn a lot". From the point of view of most classical theories of motivation, this trend can be called positive, as intangible incentives are more effective. However, it is likely that over time the opinion of the respondents may shift toward mercantile motives. Thus, it is logical to assume that a well-designed system for motivating doctors' labor should be based on motivational foundations laid during training in higher education.
Keywords: motivation, quality of working life, health system, personal-psychological factors, motivational structure
Procedia PDF Downloads 360
169 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator
Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur
Abstract:
Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and removal of heat from the refrigerator cabinets is done via one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375 L no-frost type larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity, and temperature distribution in the cooling chamber are known to be some of the most important factors that affect the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a more uniform temperature distribution throughout the refrigerator domain. The flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. To study the heat transfer inside the aforementioned refrigerator, forced convection is modeled in a closed rectangular cavity representing the refrigerating compartment. The cavity volume is represented with finite volume elements and solved computationally with the appropriate momentum and energy (Navier-Stokes) equations. The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results obtained with the 3D numerical simulations are in good agreement with the experimental airflow measurements using the SPIV technique.
After Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters (compressor capacity, fan rotational speed, and type of shelf, glass or wire) on energy consumption, pull-down time, and temperature distribution in the cabinet are studied. For each case, energy consumption is calculated based on experimental results. After the analysis, the main parameters affecting temperature distribution inside the cabinet and energy consumption are determined from the CFD simulations, and the simulation results are supplied to a Design of Experiments (DOE) study as input data for optimization. The best configuration, with minimum energy consumption and minimum temperature difference between the shelves inside the cabinet, is determined.
Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature
Procedia PDF Downloads 111
168 A Case Study of Remote Location Viewing, and Its Significance in Mobile Learning
Authors: James Gallagher, Phillip Benachour
Abstract:
As location-aware mobile technologies become ever more omnipresent, the prospect of exploiting their context awareness to enhance learning approaches grows. Building on the growing acceptance of ubiquitous computing and the steady progress in both accuracy and battery usage of pervasive devices, we present a case study of remote location viewing and how the application can be utilized to support mobile learning in situ using an existing scenario. Through the case study we introduce a new, innovative application, Mobipeek, based around a request/response protocol for viewing a remote location, and explore how this can apply both as part of a teacher-led activity and in informal learning situations. The system developed allows a user to select a point on a map and send a request. Users can attach messages alongside time and distance constraints. Users within the bounds of the request can respond with an image and an accompanying message, providing context to the response. This application can be used alongside a structured learning activity such as the use of mobile phone cameras outdoors as part of an interactive lesson. One example of a learning activity would be to collect photos in the wild of plants, vegetation, and foliage as part of a geography or environmental science lesson; another would be to photograph architectural buildings and monuments as part of an architecture course. These images can then be uploaded and displayed back in the classroom for students to share their experiences and compare their findings with their peers. This helps foster students' active participation while helping them understand lessons in a more interesting and effective way. Mobipeek can augment the student learning experience by providing further interaction with peers in a remote location. The activity can be part of a wider study between schools in different areas of the country, enabling sharing and interaction between more participants.
Remote location viewing can be used to access images in a specific location, chosen to suit the activity and lesson. For example, architectural buildings of a specific period can be shared between two or more cities. The augmentation of the learning experience is manifested in the different contextual and cultural influences as well as in the sharing of images from different locations. In addition to implementing Mobipeek, we analyse the application and a set of further possible solutions aimed at making learning more engaging. Consideration is given to the benefits of such a system, privacy concerns, and the feasibility of widespread usage. We also propose elements of "gamification" in an attempt to deepen the engagement derived from such a tool and encourage usage. We conclude by identifying limitations from both a technical and a mobile learning perspective.
Keywords: context aware, location aware, mobile learning, remote viewing
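The request/response protocol with time and distance constraints can be sketched as a small data structure. The field names and the haversine radius check below are illustrative assumptions; the paper does not specify Mobipeek's actual message format:

```python
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6371000.0

@dataclass
class PeekRequest:
    # illustrative shape of a Mobipeek view request (field names assumed)
    lat: float
    lon: float
    message: str
    max_distance_m: float  # distance constraint attached to the request
    expires_at: float      # time constraint (epoch seconds)

    def accepts(self, lat, lon, now):
        """Can a user at (lat, lon) respond to this request at time `now`?"""
        if now > self.expires_at:
            return False
        # haversine great-circle distance between requester and responder
        dlat = radians(lat - self.lat)
        dlon = radians(lon - self.lon)
        a = (sin(dlat / 2.0) ** 2
             + cos(radians(self.lat)) * cos(radians(lat)) * sin(dlon / 2.0) ** 2)
        return 2.0 * EARTH_RADIUS_M * asin(sqrt(a)) <= self.max_distance_m
```

A server holding such requests would forward each one only to users for whom `accepts` is true, and relay any image response back to the requester.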
Procedia PDF Downloads 292
167 A Discourse Analysis of Syrian Refugee Representations in Canadian News Media
Authors: Pamela Aimee Rigor
Abstract:
This study examines the representation of Syrian refugees resettled in Vancouver and the Lower Mainland in local community and major newspapers. While there is strong support for immigration in Canada, public opinion towards refugees and asylum seekers is more varied. Concerns about the legitimacy of refugee claims are common among Canadians, and hateful or negative narratives are still present in Canadian media discourse, which affects how people view refugees. To counter these narratives, Syrian refugees must publicly declare how grateful they are to have been resettled in Canada. The dominant media discourse is that these refugees should be grateful, as they have been graciously accepted by Canada and Canadians, once again upholding the image of Canada as a generous and humanitarian nation. The study examined the representation of Syrian refugees and Syrian refugee resettlement in Canadian newspapers from September 2015 to October 2017, from around the time Prime Minister Trudeau came into power up until the present. Using a combination of content and discourse analysis, it aimed to uncover how local community and major newspapers in Vancouver covered the Syrian refugee 'crisis', more particularly the arrival and resettlement of the refugees in the country. Using the qualitative data analysis software NVivo 12, the newspapers were analyzed and sorted into themes. Based on the initial findings, the discourse of Canada as a humanitarian country and Canadians as generous, as well as the idea that Syrian refugees must publicly announce their gratitude, is still present in the local community newspapers. This seems to be done to counter hateful narratives from citizens who might view the refugees as people abusing help provided by the community or services provided by the government.
However, compared to the major and national newspapers in Canada, many of these local community newspapers are very inclusive of Syrian refugee voices. Most of the news and community articles interview Syrian refugees and ask them their personal stories of plight, survival, resettlement, and starting a 'new life' in Canada. They are not seen as potential threats, nor are they dismissed: the refugees were named and were allowed to share their personal experiences in these news articles. These community newspapers, even though their representations are far from perfect, address some aspects of the refugee resettlement issue and respond to their community's needs. Quite a number of news articles announce community meetings and orientations about the Syrian refugee crisis, ways to help in the resettlement process, and community fundraising activities to sponsor refugees or resettle newly arrived ones. This study aims to promote awareness of how these individuals are socially constructed so that we can, in turn, be aware of the biases and stereotypes present and their implications for refugee laws and public response to the issue.
Keywords: forced migration and conflict, media representations, race and multiculturalism, refugee studies
Procedia PDF Downloads 253
166 Current Status of Scaled-Up Synthesis/Purification and Characterization of a Potentially Translatable Tantalum Oxide Nanoparticle Intravenous CT Contrast Agent
Authors: John T. Leman, James Gibson, Peter J. Bonitatibus
Abstract:
There have been no potential clinically translatable developments of intravenous CT contrast materials over decades, and iodinated contrast agents (ICA) remain the only FDA-approved media for CT. Small-molecule ICA used to highlight vascular anatomy have weak CT signals in large-to-obese patients due to their rapid redistribution from plasma into interstitial fluid, thereby diluting their intravascular concentration, and because of a mismatch between iodine's K-edge and the high kVp settings needed to image this patient population. The use of ICA is also contraindicated in a growing population of renally impaired patients who are hypersensitive to these contrast agents; a transformative intravenous contrast agent with improved capabilities is urgently needed. Tantalum oxide nanoparticles (TaO NPs) with zwitterionic siloxane polymer coatings have high potential as clinically translatable general-purpose CT contrast agents because of (1) substantially improved imaging efficacy compared to ICA in swine/phantoms emulating medium-sized and larger adult abdomens, and superior contrast enhancement of thoracic arteries and veins in rabbits, (2) promising biological safety profiles showing near-complete renal clearance and low tissue retention at 3x the anticipated clinical dose (ACD), and (3) clinically acceptable physicochemical parameters as concentrated bulk solutions (250-300 mg Ta/mL). Here, we review requirements for general-purpose intravenous CT contrast agents in terms of patient safety, X-ray attenuating properties and contrast-producing capabilities, and physicochemical and pharmacokinetic properties. We report the current status of a TaO NP-based contrast agent, including chemical process technology developments and results of newly defined scaled-up processes for NP synthesis and purification, yielding reproducible formulations with appropriate size and concentration specifications.
We discuss results of recent pre-clinical in vitro immunology, non-GLP high-dose tolerability in rats (10x ACD), non-GLP long-term biodistribution in rats at 3x ACD, and non-GLP repeat dosing in rats at ACD. We also include a discussion of NP characterization, in particular size-stability testing results under accelerated conditions (37 °C), and insights into TaO NP purity, surface structure, and bonding of the zwitterionic siloxane polymer coating by multinuclear (1H, 13C, 29Si) and multidimensional (2D) solution NMR spectroscopy.
Keywords: nanoparticle, imaging, diagnostic, process technology, nanoparticle characterization
Procedia PDF Downloads 39
165 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, even with an implicit time integration scheme. Consequently, some form of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to a specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, proportional to a high-order Laplacian of u0, which dampens the gradients of u0. We show that suitable values for the segment size and the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
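The AGM loop can be illustrated on a toy problem. The sketch below applies the same idea, recovering u0 by gradient descent on J = ½||u1 − v1||² with the gradient obtained from an adjoint pass, to the 1-D heat equation rather than the 2-D NSE; the grid, step sizes, iteration count, and learning rate are illustrative choices. Because the discrete heat operator is symmetric, the adjoint pass here simply reuses the stable, positive-viscosity forward stencil applied to the residual:

```python
import numpy as np

def heat_step(u, nu, dt, dx):
    # one explicit-Euler step of u_t = nu * u_xx with periodic BCs
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return u + dt * nu * lap

def forward(u0, nu, dt, dx, nsteps):
    u = u0.copy()
    for _ in range(nsteps):
        u = heat_step(u, nu, dt, dx)
    return u

def gradient(u0, v1, nu, dt, dx, nsteps):
    # dJ/du0 for J = 0.5 * ||u1 - v1||^2: the residual at t = 1,
    # transported back to t = 0 by the (stable) adjoint equation
    residual = forward(u0, nu, dt, dx, nsteps) - v1
    return forward(residual, nu, dt, dx, nsteps)

def adjoint_gradient_method(v1, nu, dt, dx, nsteps, iters=200, lr=1.0):
    # recover the initial field by descending the cost gradient;
    # every pass integrates a stable equation, never the unstable
    # negative-viscosity problem directly
    u0 = np.zeros_like(v1)
    for _ in range(iters):
        u0 = u0 - lr * gradient(u0, v1, nu, dt, dx, nsteps)
    return u0
```

For the nonlinear NSE the adjoint operator is not self-adjoint, so the backward pass solves a separate linearized adjoint equation; the descent loop, however, has the same shape as above.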
Procedia PDF Downloads 225
164 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India
Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar
Abstract:
The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are among the highest in diagnostic radiology practice, it is of great significance to be aware of the patient's radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of the volume CT dose index (CTDIvol), is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate CTDIvol values for a great number of patients during the most frequent CT examinations, to compare CT dose monitor values with measured ones, and to highlight the fluctuation of CTDIvol values for the same CT examination at different centres and scanner models. The output CT dose index measurements were carried out on single- and multislice scanners for the available kV, 5 mm slice thickness, 100 mA, and FOV combination used. 100 CT scanners were involved in this study. Data regarding 15,000 examinations of patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals: 5,000 head, 5,000 chest, and 5,000 abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and -5.7 for the selected head, chest, and abdomen protocols, respectively.
Thus, this investigation revealed an observable change in CT practice, with a much wider range of studies being performed currently in South India. This reflects the improved capacity of CT scanners to scan longer lengths and at finer resolutions, as permitted by helical and multislice technology. Also, some CT scanners use a smaller slice thickness for routine CT procedures to achieve better resolution and image quality. This increases the patient radiation dose as well as the measured CTDIvol, so such scanners should select an appropriate slice thickness and scanning parameters in order to reduce the patient dose. If these routine scan parameters for head, chest, and abdomen procedures are optimized, then the dose indices would be optimal, leading to lower CT doses. In the South Indian region, all CT machines are routinely tested for QA once a year as per AERB requirements.
Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose
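The displayed-versus-measured comparison reduces to a percentage deviation per scanner and protocol. The sketch below shows one common way to compute it; treating the study's 5.2, 8.4, and -5.7 figures as percentages is an assumption, since the abstract does not state the unit:

```python
import numpy as np

def ctdi_divergence_pct(displayed, measured):
    """Percent divergence of console-displayed CTDIvol from the
    phantom-measured value (positive = console over-reads)."""
    displayed = np.asarray(displayed, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return 100.0 * (displayed - measured) / measured
```

Averaging this quantity over all scanners surveyed for a given protocol yields the mean divergence reported per examination type.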
Procedia PDF Downloads 259
163 Air–Water Two-Phase Flow Patterns in PEMFC Microchannels
Authors: Ibrahim Rassoul, A. Serir, E-K. Si Ahmed, J. Legrand
Abstract:
The acronym PEM refers to Proton Exchange Membrane or, alternatively, Polymer Electrolyte Membrane. Due to their high efficiency, low operating temperature (30–80 °C), and rapid evolution over the past decade, PEMFCs are increasingly emerging as a viable alternative clean power source for automobile and stationary applications. Before PEMFCs can be employed to power automobiles and homes, several key technical challenges must be properly addressed. One such challenge is elucidating the mechanisms underlying water transport in and removal from PEMFCs. On one hand, sufficient water is needed in the polymer electrolyte membrane (PEM) to maintain sufficiently high proton conductivity. On the other hand, too much liquid water in the cathode can cause "flooding" (pore space filled with excessive liquid water) and hinder the transport of the oxygen reactant from the gas flow channel (GFC) to the three-phase reaction sites. The experimental transparent fuel cell used in this work was designed to represent the actual full-scale fuel cell geometry. Depending on the operating conditions, a number of flow regimes may appear in the microchannel: droplet flow, blockage by a liquid water bridge/plug (concave and convex forms), slug/plug flow, and film flow. Some of these flow patterns are new, while others have already been observed in PEMFC microchannels. An algorithm in MATLAB was developed to automatically determine the flow structure (e.g., slug, droplet, plug, film) of detected liquid water in the test microchannels and yield information on the distribution of water among the different flow structures. A video processing algorithm was developed to automatically detect dynamic and static liquid water present in the gas channels and generate relevant quantitative information. This software gives the user a more precise and systematic way to obtain measurements from images of small objects.
The void fractions are also determined based on image analysis. The aim of this work is to provide a comprehensive characterization of two-phase flow in an operating fuel cell, which can be used towards the optimization of water management and informs design guidelines for gas delivery microchannels for fuel cells, essential in the design and control of diverse applications. The approach combines numerical modeling with experimental visualization and measurements.
Keywords: polymer electrolyte fuel cell, air-water two phase flow, gas diffusion layer, microchannels, advancing contact angle, receding contact angle, void fraction, surface tension, image processing
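A minimal sketch of the image-based void fraction step, assuming a simple intensity threshold separates liquid from gas in a grayscale frame of the channel interior. The study's MATLAB algorithm is not specified in the abstract, so this Python version and its threshold rule are illustrative:

```python
import numpy as np

def void_fraction(frame, threshold=128):
    """Area void fraction of one grayscale frame.

    frame     : 2-D array of pixel intensities over the channel interior
    threshold : pixels darker than this are classified as liquid water
                (an assumed, illustrative segmentation rule)
    """
    liquid = frame < threshold
    return 1.0 - liquid.mean()  # fraction of pixels occupied by gas

def mean_void_fraction(frames, threshold=128):
    # time-averaged void fraction over a video sequence
    return float(np.mean([void_fraction(f, threshold) for f in frames]))
```

Applied frame by frame to the high-speed video, this yields the void fraction time series from which flow regimes can be cross-checked against the visual classification.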
Procedia PDF Downloads 313
162 The Effect of Elapsed Time on the Cardiac Troponin-T Degradation and Its Utility as a Time Since Death Marker in Cases of Death Due to Burn
Authors: Sachil Kumar, Anoop K.Verma, Uma Shankar Singh
Abstract:
It is extremely important to study the postmortem interval in different causes of death, since it often greatly assists in forming an opinion on the exact cause of death. With reliable knowledge of the interval, an expert can state that the cause of death was not feigned; hence there is a great need to establish, before performing an autopsy, that such a death occurred at the crime scene. The approach described here is based on analyzing the degradation (proteolysis) of a cardiac protein in deaths due to burns as a marker of time since death. Cardiac tissue samples were collected from (n = 6) medico-legal autopsies at the Department of Forensic Medicine and Toxicology, King George's Medical University, Lucknow, India, after informed consent from the relatives, and postmortem degradation was studied by incubating the cardiac tissue at room temperature (20 ± 2 °C) for different time periods (~7.30, 18.20, 30.30, 41.20, 41.40, 54.30, 65.20, and 88.40 hours). The cases included were burn victims without any prior history of disease who died in hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE), and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. As postmortem time progresses, the intact cTnT band degrades to fragments that are easily detected by the monoclonal antibodies. A decreasing trend in the level of cTnT (% of intact) was found as the postmortem hours increased. A significant difference was observed between <15 h and other postmortem hours (p < 0.01). Significant differences in cTnT level (% of intact) were also observed between 16-25 h and 56-65 h and >75 h (p < 0.01).
Western blot data clearly showed the intact protein at 42 kDa, three major fragments (28 kDa, 30 kDa, 10 kDa), three additional minor fragments (12 kDa, 14 kDa, and 15 kDa), and the formation of low-molecular-weight fragments. Overall, the postmortem interval had a statistically significant effect on the cardiac tissue of burned corpses, with the greatest amount of protein breakdown observed within the first 41.40 hours, after which the intact protein slowly disappears. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, it shows a pseudo-first-order relationship when plotted against time postmortem. A strong, significant positive correlation was found between cTnT and postmortem hours (r = 0.87, p = 0.0001), and the regression explained a good share of the variability (R² = 0.768). The postmortem troponin-T fragmentation observed in this study reveals a sequential, time-dependent process with potential for use as a predictor of the postmortem interval in cases of burning.
Keywords: burn, degradation, postmortem interval, troponin-T
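The pseudo-first-order relationship implies pct(t) ≈ 100·e^(−kt), which can be fitted by linear least squares on the log scale and then inverted to estimate the postmortem interval. The sketch below is illustrative only; the rate constant and the exact regression model used in the study are not reproduced here:

```python
import numpy as np

def fit_decay_rate(t_hours, pct_intact):
    """Fit pct(t) = 100 * exp(-k t) by least squares on ln(pct/100).

    Returns the pseudo-first-order rate constant k (per hour).
    """
    y = np.log(np.asarray(pct_intact, dtype=float) / 100.0)
    slope, _intercept = np.polyfit(np.asarray(t_hours, dtype=float), y, 1)
    return -slope

def estimate_pmi(pct_intact, k):
    # invert the decay curve: t = -ln(pct/100) / k
    return -np.log(pct_intact / 100.0) / k
```

Given a calibrated k for burn cases, a measured percent-intact value from a lane could then be converted into an estimated time since death.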
Procedia PDF Downloads 451
161 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City
Authors: Berhanu Keno Terfa
Abstract:
Flash floods are among the most dangerous natural disasters and pose a significant threat to human life. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study was undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques, considering various factors that contribute to flash flood resilience, and to develop effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed using ArcGIS software, with data from Sentinel-2 satellite imagery (10 m resolution) used for land-use/cover classification. Slope, elevation, and drainage density data were generated from the 12.5 m resolution ALOS PALSAR DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and normalizing the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study delineated flash flood hazard (FFH) zones and generated a land-suitability map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the previously developed central areas.
Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
Keywords: remote sensing, flash flood hazards, Bishoftu, GIS
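The AHP step, turning stakeholders' pairwise comparisons of the hazard factors into weights for the GIS weighted overlay, can be sketched as follows. The matrix values in the test and the random-index constant are illustrative; the study's actual comparison judgments are not reproduced:

```python
import numpy as np

def ahp_weights(pairwise):
    """Factor weights from a reciprocal pairwise-comparison matrix
    (principal right eigenvector, normalized to sum to 1)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.argmax(vals.real)
    w = np.abs(vecs[:, principal].real)
    return w / w.sum()

def consistency_ratio(pairwise, random_index):
    # Saaty's CR = CI / RI; CR < 0.1 is conventionally acceptable
    n = pairwise.shape[0]
    lam_max = np.max(np.linalg.eigvals(pairwise).real)
    ci = (lam_max - n) / (n - 1)
    return ci / random_index
```

The resulting weights then feed the overlay, e.g. a per-pixel hazard score of the form Σᵢ wᵢ · factorᵢ across the reclassified raster layers.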
Procedia PDF Downloads 38
160 Convective Boiling of CO₂/R744 in Macro and Micro-Channels
Authors: Adonis Menezes, J. C. Passos
Abstract:
The current panorama of heat transfer technology and the scarcity of information about the convective boiling of CO₂ and hydrocarbons in small-diameter channels motivated this work. Among non-halogenated refrigerants, CO₂/R744 has distinct thermodynamic properties compared to other fluids. R744 presents significant differences in operating pressures and temperatures, operating at higher values than other refrigerants, and this represents a challenge for the design of new evaporators, as the original systems must normally be resized to meet the specific characteristics of R744, creating the need for new design and optimization criteria. To carry out the convective boiling tests of CO₂, an experimental apparatus was used capable of storing (m = 10 kg) of saturated CO₂ at (T = -30 °C) in an accumulator tank; this fluid was then pumped using a positive displacement pump with three pistons, with a controlled outlet pressure of up to (P = 110 bar). This high-pressure saturated fluid passed through a Coriolis-type flow meter, with mass velocities varying between (G = 20 kg/m².s) and (G = 1000 kg/m².s). The fluid was then sent to the first test section, of circular cross-section with diameter (D = 4.57 mm), where the inlet and outlet temperatures and pressures were controlled and heating was promoted by the Joule effect using a direct current source with a maximum heat flux of (q = 100 kW/m²). The second test section used a multi-channel arrangement (seven parallel channels) with a square cross-section of (D = 2 mm) each; this second test section also has control of temperature and pressure at the inlet and outlet, and for heating a direct current source was used, with a maximum heat flux of (q = 20 kW/m²).
The fluid, in a biphasic state, was directed to a parallel-plate heat exchanger so that it returned to the liquid state and could go back to the accumulator tank, continuing the cycle. The multi-channel test section has a viewing section; a high-speed CMOS camera was used for image acquisition, making it possible to visualize the flow patterns. The experiments carried out and presented in this report were conducted in a rigorous manner, enabling the development of a database on the convective boiling of R744 in macro and micro channels. The analysis prioritized the processes from the onset of convective boiling until dryout of the wall in a subcritical regime. R744 resurfaces as an excellent alternative to chlorofluorocarbon refrigerants due to its negligible ODP (Ozone Depletion Potential) and low GWP (Global Warming Potential), among other advantages. The results of the experimental tests were very promising for the use of CO₂ in micro-channels under convective boiling and served as a basis for determining the flow pattern map and a correlation for the heat transfer coefficient in the convective boiling of CO₂.
Keywords: convective boiling, CO₂/R744, macro-channels, micro-channels
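The operating ranges quoted above are tied together by two simple definitions: mass velocity G = ṁ/A and imposed heat flux q = P/A_heated. The following Python sketch illustrates that arithmetic with hypothetical numbers of our own choosing (the flow rate and heated area below are not given in the abstract):

```python
import math

def mass_velocity(m_dot_kg_s: float, diameter_m: float) -> float:
    """Mass velocity G = m_dot / A for a circular channel, in kg/(m^2.s)."""
    area = math.pi * diameter_m ** 2 / 4.0
    return m_dot_kg_s / area

def joule_heat_flux(power_w: float, heated_area_m2: float) -> float:
    """Heat flux q = P / A_heated imposed by DC (Joule) heating, in W/m^2."""
    return power_w / heated_area_m2

# Example: the single 4.57 mm channel with an illustrative flow of 5 g/s,
# which lands inside the reported G = 20-1000 kg/(m^2.s) range.
G = mass_velocity(0.005, 0.00457)
q = joule_heat_flux(100.0, 0.001)  # 100 W over an assumed 0.001 m^2 heated area
```

With these illustrative inputs, G evaluates to roughly 305 kg/m²·s, within the experimental range reported for the circular test section.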
159 The Temporal Implications of Spatial Prospects
Authors: Zhuo Job Chen, Kevin Nute
Abstract:
The work reported examines potential linkages between spatial and temporal prospects, and more specifically, between variations in the spatial depth and foreground obstruction of window views, and observers’ sense of connection to the future. It was found that external views from indoor spaces were strongly associated with a sense of the future, that partially obstructing such a view with foreground objects significantly reduced its association with the future, and that replacing it with a pictorial representation of the same scene (with no actual depth) removed most of its temporal association. A lesser change in the spatial depth of the view, however, had no apparent effect on association with the future. While the role of spatial depth has yet to be confirmed, the results suggest that spatial prospects directly affect temporal ones. The word “prospect” typifies the overlapping of the spatial and temporal in most human languages. It originated in classical times as a purely spatial term, but in the 16th century took on the additional temporal implication of an imagined view ahead, of the future. The psychological notion of prospection, then, has its distant origins in a spatial analogue. While it is not yet proven that space directly structures our processing of time at a physiological level, it is generally agreed that it commonly does so conceptually. The mental representation of possible futures has been a central part of human survival as a species (Boyer, 2008; Suddendorf & Corballis, 2007). A sense of the future seems critical not only practically, but also psychologically. It has been suggested, for example, that lack of a positive image of the future may be an important contributing cause of depression (Beck, 1974; Seligman, 2016). Most people in the developed world now spend more than 90% of their lives indoors.
So any direct link between external views and temporal prospects could have important implications for both human well-being and building design. We found that the ability to see what lies in front of us spatially was strongly associated with a sense of what lies ahead temporally. Partial obstruction of a view was found to significantly reduce that sense of connection to the future. Replacing a view with a flat pictorial representation of the same scene removed almost all of its connection with the future, but changing the spatial depth of a real view appeared to have no significant effect. While foreground obstructions were found to reduce subjects’ sense of connection to the future, they increased their sense of refuge and security. Consistent with Prospect and Refuge theory, an ideal environment, then, would seem to be one in which we can “see without being seen” (Lorenz, 1952), specifically one that conceals us frontally from others without restricting our own view. It is suggested that these optimal conditions might be translated architecturally as screens whose apertures are large enough for a building occupant to see through unobstructed from close by, but small enough to conceal them from the view of someone looking from a distance outside.
Keywords: foreground obstructions, prospection, spatial depth, window views
158 Socially Sustainable Urban Rehabilitation Projects: Case Study of Ortahisar, Trabzon
Authors: Elif Berna Var
Abstract:
Cultural, physical, socio-economic, or political changes occurring in urban areas may result in periods of decay, which can cause social problems. As a solution, urban renewal projects have been used in European countries since World War II, whereas they gained importance in Turkey after the 1980s. The first attempts were mostly related to physical or economic aspects, which later caused negative effects on social patterns. Thus, social concerns have also begun to be included in renewal processes in developed countries. This integrative approach combining social, physical, and economic aspects promotes creating more sustainable neighbourhoods for both current and future generations. However, it is still a new subject for developing countries like Turkey. Concentrating on Trabzon, Turkey, this study highlights the importance of socially sustainable urban renewal processes, especially in historical neighbourhoods where protecting the urban identity of the area, as well as its social structure, is vital to creating sustainable environments. Being in the historic city centre and having remarkable traditional houses, Ortahisar is an important image for Trabzon. Because the architectural and historical pattern of the area is still visible but needs rehabilitation, 'urban rehabilitation' is used as the urban renewal method in this study. A project has been developed by the local government to create a secondary city centre and a new landmark for the city, but it is still unclear whether this project can ensure the social sustainability of the area, which is one of the concerns of this research. In the study, it is suggested that the social sustainability of an area can be achieved through several factors. In order to determine the factors affecting the social sustainability of an urban rehabilitation project, previous studies have been analysed and an attempt is made to define some common features.
To achieve this, firstly, several analyses are conducted to find out the social structure of Ortahisar. Secondly, structured interviews are administered to 150 local people, aiming to measure their satisfaction level, awareness, and expectations, and to record their demographic background in detail. Those data are used to define the critical factors for a more socially sustainable neighbourhood in Ortahisar. Later, 50 experts and 150 local people are asked to rank the priority of those factors, in order to compare their attitudes and find common criteria. According to the results, it can be said that the social sustainability of the Ortahisar neighbourhood can be improved by considering various factors such as the quality of urban areas, demographic factors, public participation, social cohesion and harmony, proprietorial factors, and facilities for education and employment. In the end, several suggestions are made for the Ortahisar case to promote a more socially sustainable urban neighbourhood. As a pilot study highlighting the importance of social sustainability, it is hoped that this attempt might contribute to achieving more socially sustainable urban rehabilitation projects in Turkey.
Keywords: urban rehabilitation, social sustainability, Trabzon, Turkey
157 Deep Convolutional Neural Network for Detection of Microaneurysms in Retinal Fundus Images at Early Stage
Authors: Goutam Kumar Ghorai, Sandip Sadhukhan, Arpita Sarkar, Debprasad Sinha, G. Sarkar, Ashis K. Dhara
Abstract:
Diabetes mellitus is one of the most common chronic diseases in all countries, and its incidence continues to increase significantly. Diabetic retinopathy (DR) is damage to the retina that occurs with long-term diabetes. DR is a major cause of blindness in the Indian population. Therefore, its early diagnosis is of utmost importance in preventing progression towards imminent irreversible loss of vision, particularly in the huge population across rural India. The barriers to eye examination of all diabetic patients are socioeconomic factors, lack of referrals, poor access to the healthcare system, lack of knowledge, an insufficient number of ophthalmologists, and lack of networking between physicians, diabetologists, and ophthalmologists. A few diabetic patients often visit a healthcare facility for their general checkup, but their eye condition remains largely undetected until the patient is symptomatic. This work focuses on the design and development of a fully automated intelligent decision system for screening retinal fundus images to detect the pathophysiology caused by microaneurysms in the early stage of the disease. Automated detection of microaneurysms is a challenging problem due to variation in color and the variation introduced by the field of view, inhomogeneous illumination, and pathological abnormalities. We have developed a convolutional neural network for efficient detection of microaneurysms. A loss function is also developed to handle the severe class imbalance caused by the very small size of microaneurysms compared to the background. The network is able to locate the salient region containing microaneurysms in noisy images captured by non-mydriatic cameras. The ground truth for microaneurysms is created by expert ophthalmologists for the MESSIDOR database as well as a private database collected from Indian patients. The network is trained from scratch using the fundus images of the MESSIDOR database.
The proposed method is evaluated on DIARETDB1 and the private database. The method is successful in detecting microaneurysms in dilated and non-dilated fundus images acquired from different medical centres. The proposed algorithm could be used for the development of an AI-based, affordable, and accessible system to provide service at grassroots-level primary healthcare units spread across the country, catering to the needs of rural people unaware of the severe impact of DR.
Keywords: retinal fundus image, deep convolutional neural network, early detection of microaneurysms, screening of diabetic retinopathy
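The abstract mentions a purpose-built loss function for the severe foreground/background imbalance, without specifying its form. A common choice for such problems, shown here only as an illustrative sketch rather than the authors' actual loss, is a class-weighted binary cross-entropy that up-weights the rare positive (microaneurysm) pixels:

```python
import math

def weighted_bce(y_true, y_pred, pos_weight=100.0, eps=1e-7):
    """Class-weighted binary cross-entropy over a flat list of pixels.

    pos_weight up-weights the rare positive class so the loss is not
    dominated by the overwhelming number of background pixels.
    """
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(pos_weight * t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# A confident correct prediction costs little; missing the rare positive
# pixel dominates the loss, pushing the network to find small lesions.
loss_good = weighted_bce([1, 0, 0, 0], [0.9, 0.1, 0.1, 0.1])
loss_miss = weighted_bce([1, 0, 0, 0], [0.1, 0.1, 0.1, 0.1])
```

Here `pos_weight` is a hypothetical value; in practice it would be tuned to the observed foreground/background pixel ratio.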
156 Classification of Foliar Nitrogen in Common Bean (Phaseolus Vulgaris L.) Using Deep Learning Models and Images
Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso
Abstract:
Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, either in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. Thus, it becomes necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha⁻¹, T2 = 25 kg N ha⁻¹, T3 = 75 kg N ha⁻¹, and T4 = 100 kg N ha⁻¹) and 12 replications. Pots with 5 L capacity were used, with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. The plants were watered with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. Code developed in Matlab© R2022b was used to cut the original images into smaller blocks, producing an image bank composed of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. The Matlab© R2022b software was also used for the implementation and performance analysis of the model.
The model's efficiency was evaluated with a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). The ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an AC of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future work is encouraged to develop mobile devices capable of handling images using deep learning for the classification of the nutritional status of plants in situ.
Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence
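The block-cutting step described above (each photo split into 224x224 crops, sorted into one folder per N dose) can be sketched in Python; this is an illustrative reimplementation of the idea, not the authors' Matlab© code:

```python
def tile_image(image, block=224):
    """Split a 2-D image (list of pixel rows) into non-overlapping
    block x block tiles.

    Rows/columns that do not fill a complete tile are discarded, so every
    crop has the exact 224x224 input size ResNet-50 expects.
    """
    h, w = len(image), len(image[0])
    tiles = []
    for top in range(0, h - block + 1, block):
        for left in range(0, w - block + 1, block):
            tiles.append([row[left:left + block] for row in image[top:top + block]])
    return tiles

# A hypothetical 448x672 source photo yields a 2x3 grid of 224x224 crops,
# each of which would be saved into the folder for its N-dose class.
img = [[0] * 672 for _ in range(448)]
crops = tile_image(img)
```

In the study's pipeline, each resulting crop is labeled T1-T4 according to the dose applied to the plant it came from.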
155 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation
Authors: Ali Ashtiani, Hamid Shirazi
Abstract:
This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community, and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation, and reconstruction plans for airport pavements. Therefore, crack density has the potential to become an important indicator of pavement condition if the type, severity, and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor, and performing surveys may disrupt normal operations. Given the variability of manual surveys, this method has shown inconsistencies in distress classification and measurement. This can potentially impact planning for pavement maintenance, rehabilitation, and reconstruction and the associated funding strategies. A substantial effort has been devoted over the past 20 years to reducing human intervention, and the error associated with it, by moving toward automated distress collection methods. Automated methods refer to systems that identify, classify, and quantify pavement distresses through processes that require no or very minimal human intervention. This principally involves the use of digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for the measurement and classification of pavement cracks captured in digital images is a challenge to developing a reliable automated system for distress assessment.
Variations in the types and severity of distresses, different pavement surface textures and colors, and the presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves the collection, interpretation, and processing of surface images to identify the type, quantity, and severity of surface distresses. The outputs can be used to quantitatively calculate the crack density. The systems for automated distress survey using digital images reviewed in this paper can assist the airport industry in the development of a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index. This index can be used as a means of assessing pavement condition and predicting pavement performance. Airport owners can use it to determine the type of pavement maintenance and rehabilitation in a more consistent way.
Keywords: airport pavement management, crack density, pavement evaluation, pavement management
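In its simplest form, the crack density index discussed above is the severity-weighted crack length measured per unit of surveyed pavement area. A minimal sketch of that arithmetic, with severity weights chosen purely for illustration (no published standard is implied):

```python
def crack_density(cracks, section_area_m2, severity_weights=None):
    """Severity-weighted crack density: metres of crack per m^2 of pavement.

    `cracks` is a list of (length_m, severity) pairs such as an automated
    survey might emit; the default weights are illustrative only.
    """
    weights = severity_weights or {"low": 1.0, "medium": 2.0, "high": 3.0}
    weighted_length = sum(length * weights[sev] for length, sev in cracks)
    return weighted_length / section_area_m2

# Hypothetical survey output for a 500 m^2 pavement section:
survey = [(12.0, "low"), (4.5, "high"), (8.0, "medium")]
index = crack_density(survey, section_area_m2=500.0)  # (12 + 13.5 + 16) / 500
```

Tracked over time per pavement section, such an index could feed the condition-assessment and performance-prediction uses the paper describes.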
154 The Impact of Emotional Intelligence on Organizational Performance
Authors: El Ghazi Safae, Cherkaoui Mounia
Abstract:
Within companies, emotions have been forgotten as key elements of successful management systems, seen as factors that disturb judgment, provoke reckless acts, or negatively affect decision-making. Management systems were long influenced by the Taylorist image of the worker, which made work regular and plain and treated employees as executing machines. Recently, however, in a globalized economy characterized by a variety of uncertainties, emotions have proved to be useful, even necessary, elements of high-level management. The work of Elton Mayo and Kurt Lewin reveals the importance of emotions, and since then emotions have attracted considerable attention. These studies have shown that emotions influence, directly or indirectly, many organizational processes: for example, the quality of interpersonal relationships, job satisfaction, absenteeism, stress, leadership, performance, and team commitment. Emotions have become fundamental and indispensable to individual performance and thus to management efficiency. The idea that a person's potential is associated with intellectual intelligence, measured by IQ, as the main factor of social, professional, and even sentimental success was the main assumption that needed to be questioned. The literature on emotional intelligence has made clear that success at work does not depend only on intellectual intelligence but also on other factors. Several studies investigating the impact of emotional intelligence on performance have shown that emotionally intelligent managers perform better, attain remarkable results, are able to achieve organizational objectives, influence the mood of their subordinates, and create a friendly work environment. An improvement in the emotional intelligence of managers is therefore linked to the professional development of the organization and not only to the personal development of the manager. In this context, it is worth questioning the importance of emotional intelligence: does it impact organizational performance?
What is the importance of emotional intelligence, and how does it impact organizational performance? The literature highlights that the measurement and conceptualization of emotional intelligence are difficult. Efforts to measure emotional intelligence have identified three more prominent models: the ability model, the mixed model, and the trait model. The first treats emotional intelligence as a cognitive ability, the second mixes emotional skills with personality-related aspects, and the third is intertwined with personality traits. Despite strong claims about the importance of emotional intelligence in the workplace, few studies have empirically examined its impact on organizational performance, because even though the concept of performance is at the heart of all evaluation processes of companies and organizations, performance remains a multidimensional concept, and many authors insist on the vagueness surrounding it. Given the above, this article provides an overview of the research related to emotional intelligence, focusing in particular on studies that investigated the impact of emotional intelligence on organizational performance, in order to contribute to the emotional intelligence literature, highlight its importance, and show how it impacts companies' performance.
Keywords: emotions, performance, intelligence, firms
153 Extra Skin Removal Surgery and Its Effects: A Comprehensive Review
Authors: Rebin Mzhda Mohammed, Hoshmand Ali Hama Agha
Abstract:
Excess skin, often a consequence of substantial weight loss or the aging process, introduces physical discomfort, obstructs daily activities, and undermines an individual's self-esteem. As these challenges become increasingly prevalent, the need to explore viable solutions grows in significance. Extra skin removal surgery, colloquially known as body contouring surgery, has emerged as a compelling intervention to ameliorate the physical and psychological burdens of excess skin. This study undertakes a comprehensive review to illuminate the intricacies of extra skin removal surgery, encompassing its diverse procedures, associated risks, benefits, and psychological implications for patients. The methodological approach involves a systematic and exhaustive review of pertinent scholarly literature sourced from reputable databases, including PubMed, Google Scholar, and specialized cosmetic surgery journals. Articles are meticulously curated based on their relevance, credibility, and recency. Subsequently, data from these sources are synthesized and categorized, facilitating a comprehensive understanding of the subject matter. Qualitative analysis serves to unravel the nuanced psychological effects, while quantitative data, where available, are harnessed to underpin the study's conclusions. In terms of major findings, the research underscores the manifold advantages of extra skin removal surgery. Patients experience a notable improvement in physical comfort, amplified mobility, enhanced self-confidence, and a newfound ability to wear clothing comfortably. Nonetheless, the benefits are juxtaposed with potential risks, encompassing infection, scarring, hematoma, delayed healing, and the challenge of achieving symmetry. A salient discovery is the profound psychological impact of the surgery, as patients consistently report elevated body image satisfaction, heightened self-esteem, and a substantial enhancement in overall quality of life.
In summation, this research accentuates the pivotal role of extra skin removal surgery in ameliorating the intricate interplay of physical and psychological difficulties posed by excess skin. By elucidating the diverse procedures, associated risks, and psychological outcomes, the study contributes to a comprehensive and informed understanding of the surgery's multifaceted effects. Individuals contemplating this transformative surgical option are thus equipped with comprehensive insights, fostering informed decision-making guided by the expertise of medical professionals.
Keywords: extra skin removal surgery, body contouring, abdominoplasty, brachioplasty, thigh lift, body lift, benefits, risks, psychological effects
152 Preparation of Papers - Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge for doctors and hematologists. Worldwide, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia has been time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach, which has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger datasets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results of this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15,135 total images; 8,491 of these are images of abnormal cells, and 5,398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger datasets. The proposed diagnostic system detects and classifies leukemia. Different from other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50.
Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, fusing the features from specific abstraction layers can be treated as adding auxiliary features, leading to a further improvement in classification accuracy. Features extracted from the lower levels are combined into higher-dimension feature maps to help improve the discriminative capability of intermediate features and to overcome the problem of vanishing or exploding gradients. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model's performance, and their pros and cons, will be presented at the conference.
Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
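The fusion step described above, concatenating features from the two backbones before classification, can be sketched generically. The pure-Python stand-ins below replace the VGG19/ResNet50 feature extractors; in practice each would be a pretrained network with its classification head removed:

```python
def fuse_features(*feature_vectors):
    """Concatenate per-model feature vectors into one fused representation,
    as hybrid architectures do when pooling features from several backbones."""
    fused = []
    for vec in feature_vectors:
        fused.extend(vec)
    return fused

# Hypothetical stand-ins for pooled activations from the two extractors:
vgg_features = [0.2, 0.7, 0.1]     # e.g. pooled VGG19 output
resnet_features = [0.9, 0.3]       # e.g. pooled ResNet50 output

fused = fuse_features(vgg_features, resnet_features)
# The fused vector then feeds a single classifier head
# (normal vs. abnormal cell) trained on the combined representation.
```

The design intuition is that the two backbones learn complementary abstractions, so the concatenated vector is more discriminative than either alone.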