Search results for: tree search algorithm
372 Geographic Information Systems and a Breath of Opportunities for Supply Chain Management: Results from a Systematic Literature Review
Authors: Anastasia Tsakiridi
Abstract:
Geographic information systems (GIS) have been utilized in numerous spatial problems, such as site research, land suitability, and demographic analysis. Besides, GIS has been applied in scientific fields like geography, health, and economics. In business studies, GIS has been used to provide insights and spatial perspectives in demographic trends, spending indicators, and network analysis. To date, the information regarding the available usages of GIS in supply chain management (SCM) and how these analyses can benefit businesses is limited. A systematic literature review (SLR) of the last 5-year peer-reviewed academic literature was conducted, aiming to explore the existing usages of GIS in SCM. The searches were performed in 3 databases (Web of Science, ProQuest, and Business Source Premier) and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. The analysis resulted in 79 papers. The results indicate that the existing GIS applications used in SCM were in the following domains: a) network/ transportation analysis (in 53 of the papers), b) location – allocation site search/ selection (multiple-criteria decision analysis) (in 45 papers), c) spatial analysis (demographic or physical) (in 34 papers), d) combination of GIS and supply chain/network optimization tools (in 32 papers), and e) visualization/ monitoring or building information modeling applications (in 8 papers). An additional categorization of the literature was conducted by examining the usage of GIS in the supply chain (SC) by the business sectors, as indicated by the volume of the papers. The results showed that GIS is mainly being applied in the SC of the biomass biofuel/wood industry (33 papers). Other industries that are currently utilizing GIS in their SC were the logistics industry (22 papers), the humanitarian/emergency/health care sector (10 papers), the food/agro-industry sector (5 papers), the petroleum/ coal/ shale gas sector (3 papers), the faecal sludge sector (2 papers), the recycle and product footprint industry (2 papers), and the construction sector (2 papers). The results were also presented by the geography of the included studies and the GIS software used to provide critical business insights and suggestions for future research. The results showed that research case studies of GIS in SCM were conducted in 26 countries (mainly in the USA) and that the most prominent GIS software provider was the Environmental Systems Research Institute’s ArcGIS (in 51 of the papers). This study is a systematic literature review of the usage of GIS in SCM. The results showed that the GIS capabilities could offer substantial benefits in SCM decision-making by providing key insights to cost minimization, supplier selection, facility location, SC network configuration, and asset management. However, as presented in the results, only eight industries/sectors are currently using GIS in their SCM activities. These findings may offer essential tools to SC managers who seek to optimize the SC activities and/or minimize logistic costs and to consultants and business owners that want to make strategic SC decisions. Furthermore, the findings may be of interest to researchers aiming to investigate unexplored research areas where GIS may improve SCM.Keywords: supply chain management, logistics, systematic literature review, GIS
Procedia PDF Downloads 141
371 A Multi-criteria Decision Method For The Recruitment Of Academic Personnel Based On The Analytical Hierarchy Process And The Delphi Method In A Neutrosophic Environment (Full Text)
Authors: Antonios Paraskevas, Michael Madas
Abstract:
For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and innovative teaching and learning practices of high quality. In this vein, the appropriate selection of academic staff is a very important factor in the competitiveness, efficiency and reputation of an academic institute. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision makers could utilize our approach in order to reach an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable given the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision-maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model is the first to apply the Neutrosophic Delphi Method to the selection of academic staff. As a case study, we applied our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous scholars' work and thus addressing the inherent ineffectiveness that becomes apparent in traditional multi-criteria decision-making methods when dealing with such situations. As a further result, we show that our method demonstrates greater applicability and reliability when compared to other decision models.
Keywords: analytical hierarchy process, delphi method, multi-criteria decision-making method, neutrosophic set theory, personnel recruitment
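As an editorial illustration of the kind of aggregation such a framework rests on, the sketch below scores single-valued neutrosophic judgments and combines them across weighted decision-makers; the score function, the weights, and the judgment values are illustrative assumptions, not the authors' formulation.

```python
# Minimal sketch: scoring single-valued neutrosophic judgments and weighting decision-makers.
# The score function and all numeric values are illustrative assumptions, not the paper's data.

def snn_score(t, i, f):
    """One commonly used score function for a single-valued neutrosophic number (T, I, F)."""
    return (2.0 + t - i - f) / 3.0

# Pairwise judgment of "candidate A vs candidate B" on one criterion,
# given by three decision-makers as (truth, indeterminacy, falsity) triples.
judgments = [(0.8, 0.2, 0.1), (0.6, 0.3, 0.2), (0.7, 0.1, 0.1)]
dm_weights = [0.5, 0.3, 0.2]   # relative importance of each decision-maker (Delphi stage)

crisp = sum(w * snn_score(*j) for w, j in zip(dm_weights, judgments))
print(f"aggregated crisp judgment: {crisp:.3f}")  # feeds one cell of the AHP comparison matrix
```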
Procedia PDF Downloads 199
370 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited, leading to system compromise, data leakage, or denial of service. C and C++ open-source code is now available, making it possible to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove irrelevant components and shorten dependencies. Moreover, we retain the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model that can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but require higher execution time, as the word embedding algorithm adds complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
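A minimal sketch of the kind of pipeline described (token embedding followed by a recurrent classifier) is shown below; the vocabulary size, layer sizes, and synthetic data are placeholders rather than the authors' configuration, and pre-trained GloVe or fastText vectors would replace the randomly initialised embedding in practice.

```python
# Minimal sketch of an embedding + BiLSTM vulnerability classifier (placeholder sizes and data).
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 10000   # tokens in the vocabulary built from the intermediate representation
MAX_LEN = 300        # tokens kept per function
EMBED_DIM = 100      # matches a common GloVe dimensionality

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),        # swap in GloVe/fastText weights here
    layers.Bidirectional(layers.LSTM(64)),          # BiLSTM over the token sequence
    layers.Dense(1, activation="sigmoid"),          # vulnerable / not vulnerable
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# x: integer-encoded functions, y: 0/1 vulnerability labels (dummy data for illustration)
x = np.random.randint(0, VOCAB_SIZE, size=(128, MAX_LEN))
y = np.random.randint(0, 2, size=(128,))
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```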
Procedia PDF Downloads 88
369 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction
Authors: Talal Alsulaiman, Khaldoun Khashanah
Abstract:
In this paper, we provide a literature survey on artificial stock markets (ASMs). The paper begins by exploring the complexity of the stock market and the need for ASMs. ASMs aim to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market system is a complex system in which the relationship between the micro and macro levels cannot be captured analytically. Computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-averse assumption in the construction of agents' attributes. Also, the influence of social networks on the development of agents' interactions is addressed. Network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized. These include approaches such as Genetic Algorithms, Genetic Programming, Artificial Neural Networks, and Reinforcement Learning. In addition, the most common statistical properties (the stylized facts) of stocks that are used for calibration and validation of ASMs are discussed. We also review the major related previous studies and categorize the approaches they utilize. Finally, research directions and potential research questions are discussed. The research directions of ASMs may focus on the macro level by analyzing market dynamics or on the micro level by investigating the wealth distributions of the agents.
Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks
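The micro-to-macro link discussed above can be illustrated with a toy agent-based sketch in which heterogeneous agents (fundamentalists and chartists) submit demands and the price adjusts to the excess demand; the agent types, parameters, and price-impact rule are illustrative assumptions, not a model from the surveyed literature.

```python
# Toy agent-based market: fundamentalists vs. chartists, price moved by average excess demand.
import random

FUNDAMENTAL = 100.0
prices = [100.0, 101.0]

class Agent:
    def __init__(self, kind):
        self.kind = kind  # "fundamentalist" or "chartist"

    def demand(self, prices):
        if self.kind == "fundamentalist":
            return FUNDAMENTAL - prices[-1]   # buy below fundamental value, sell above
        return prices[-1] - prices[-2]        # chartist: follow the recent trend

agents = [Agent(random.choice(["fundamentalist", "chartist"])) for _ in range(200)]

for _ in range(50):
    excess = sum(a.demand(prices) for a in agents) / len(agents)
    prices.append(prices[-1] + 0.1 * excess)  # simple price-impact rule

print(f"final price: {prices[-1]:.2f}")
```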
Procedia PDF Downloads 354
368 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping, and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Since this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have been widely used in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for DTM construction. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications. A 3D point cloud is created with LiDAR technology by obtaining numerous point measurements. More recently, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors such as the data collection method, the distribution of elevation points, the point density, surface properties, and the interpolation method. In this study, the random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75, 50, 25, and 5% of the original image-based point cloud data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud data sets to the 50% density level while still maintaining the quality of the DTM.
Keywords: DTM, Unmanned Aerial Vehicle (UAV), uniform, random, kriging
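A minimal sketch of the data-reduction experiment is given below: an elevation point cloud is randomly thinned to a given percentage, re-gridded, and compared against the full-density surface. The data are synthetic, and scipy's griddata stands in for the Kriging interpolation used in the study (a dedicated Kriging package such as pykrige would be the closer tool).

```python
# Random reduction of an elevation point cloud and re-interpolation to a DTM grid (synthetic data).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
xyz = np.column_stack([rng.uniform(0, 1000, 50000),
                       rng.uniform(0, 1000, 50000),
                       rng.uniform(200, 400, 50000)])   # synthetic x, y, elevation

def reduce_and_grid(points, keep_fraction, cell=10.0):
    n_keep = int(len(points) * keep_fraction)
    idx = rng.choice(len(points), size=n_keep, replace=False)   # random data reduction
    sub = points[idx]
    gx, gy = np.mgrid[0:1000:cell, 0:1000:cell]
    return griddata(sub[:, :2], sub[:, 2], (gx, gy), method="linear")

dtm_full = reduce_and_grid(xyz, 1.00)
for frac in (0.75, 0.50, 0.25, 0.05):
    dtm = reduce_and_grid(xyz, frac)
    rmse = np.sqrt(np.nanmean((dtm - dtm_full) ** 2))
    print(f"{int(frac * 100):3d}% of points -> RMSE vs full DTM: {rmse:.3f} m")
```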
Procedia PDF Downloads 153
367 The Use of Vasopressin in the Management of Severe Traumatic Brain Injury: A Narrative Review
Authors: Nicole Selvi Hill, Archchana Radhakrishnan
Abstract:
Introduction: Traumatic brain injury (TBI) is a leading cause of mortality among trauma patients. In the management of TBI, the main principle is avoiding cerebral ischemia, as this is a strong determiner of neurological outcomes. The use of vasoactive drugs, such as vasopressin, has an important role in maintaining cerebral perfusion pressure to prevent secondary brain injury. Current guidelines do not suggest a preferred vasoactive drug to administer in the management of TBI, and there is a paucity of information on the therapeutic potential of vasopressin following TBI. Vasopressin is also an endogenous anti-diuretic hormone (AVP), and pathways mediated by AVP play a large role in the underlying pathological processes of TBI. This creates an overlap of discussion regarding the therapeutic potential of vasopressin following TBI. Currently, its popularity lies in vasodilatory and cardiogenic shock in the intensive care setting, with increasing support for its use in haemorrhagic and septic shock. Methodology: This is a review article based on a literature review. An electronic search was conducted via PubMed, Cochrane, EMBASE, and Google Scholar. The aim was to identify clinical studies looking at the therapeutic administration of vasopressin in severe traumatic brain injury. The primary aim was to look at the neurological outcome of patients. The secondary aim was to look at surrogate markers of cerebral perfusion measurements, such as cerebral perfusion pressure, cerebral oxygenation, and cerebral blood flow. Results: Eight papers were included in the final number. Three were animal studies; five were human studies, comprised of three case reports, one retrospective review of data, and one randomised control trial. All animal studies demonstrated the benefits of vasopressors in TBI management. One animal study showed the superiority of vasopressin in reducing intracranial pressure and increasing cerebral oxygenation over a catecholaminergic vasopressor, phenylephrine. All three human case reports were supportive of vasopressin as a rescue therapy in catecholaminergic-resistant hypotension. The retrospective review found vasopressin did not increase cerebral oedema in TBI patients compared to catecholaminergic vasopressors; and demonstrated a significant reduction in the requirements of hyperosmolar therapy in patients that received vasopressin. The randomised control trial results showed no significant differences in primary and secondary outcomes between TBI patients receiving vasopressin versus those receiving catecholaminergic vasopressors. Apart from the randomised control trial, the studies included are of low-level evidence. Conclusion: Studies favour vasopressin within certain parameters of cerebral function compared to control groups. However, the neurological outcomes of patient groups are not known, and animal study results are difficult to extrapolate to humans. It cannot be said with certainty whether vasopressin’s benefits stand above usage of other vasoactive drugs due to the weaknesses of the evidence. Further randomised control trials, which are larger, standardised, and rigorous, are required to improve knowledge in this field.Keywords: catecholamines, cerebral perfusion pressure, traumatic brain injury, vasopressin, vasopressors
Procedia PDF Downloads 66
366 Assessing Diagnostic and Evaluation Tools for Use in Urban Immunisation Programming: A Critical Narrative Review and Proposed Framework
Authors: Tim Crocker-Buque, Sandra Mounier-Jack, Natasha Howard
Abstract:
Background: Due to both the increasing scale and speed of urbanisation, urban areas in low and middle-income countries (LMICs) host increasingly large populations of under-immunized children, with the additional associated risks of rapid disease transmission in high-density living environments. Multiple interdependent factors are associated with these coverage disparities in urban areas and most evidence comes from relatively few countries, e.g., predominantly India, Kenya, Nigeria, and some from Pakistan, Iran, and Brazil. This study aimed to identify, describe, and assess the main tools used to measure or improve coverage of immunisation services in poor urban areas. Methods: Authors used a qualitative review design, including academic and non-academic literature, to identify tools used to improve coverage of public health interventions in urban areas. Authors selected and extracted sources that provided good examples of specific tools, or categories of tools, used in a context relevant to urban immunization. Diagnostic (e.g., for data collection, analysis, and insight generation) and programme tools (e.g., for investigating or improving ongoing programmes) and interventions (e.g., multi-component or stand-alone with evidence) were selected for inclusion to provide a range of type and availability of relevant tools. These were then prioritised using a decision-analysis framework and a tool selection guide for programme managers developed. Results: Authors reviewed tools used in urban immunisation contexts and tools designed for (i) non-immunization and/or non-health interventions in urban areas, and (ii) immunisation in rural contexts that had relevance for urban areas (e.g., Reaching every District/Child/ Zone). Many approaches combined several tools and methods, which authors categorised as diagnostic, programme, and intervention. The most common diagnostic tools were cross-sectional surveys, key informant interviews, focus group discussions, secondary analysis of routine data, and geographical mapping of outcomes, resources, and services. Programme tools involved multiple stages of data collection, analysis, insight generation, and intervention planning and included guidance documents from WHO (World Health Organisation), UNICEF (United Nations Children's Fund), USAID (United States Agency for International Development), and governments, and articles reporting on diagnostics, interventions, and/or evaluations to improve urban immunisation. Interventions involved service improvement, education, reminder/recall, incentives, outreach, mass-media, or were multi-component. The main gaps in existing tools were an assessment of macro/policy-level factors, exploration of effective immunization communication channels, and measuring in/out-migration. The proposed framework uses a problem tree approach to suggest tools to address five common challenges (i.e. identifying populations, understanding communities, issues with service access and use, improving services, improving coverage) based on context and available data. Conclusion: This study identified many tools relevant to evaluating urban LMIC immunisation programmes, including significant crossover between tools. This was encouraging in terms of supporting the identification of common areas, but problematic as data volumes, instructions, and activities could overwhelm managers and tools are not always suitably applied to suitable contexts. Further research is needed on how best to combine tools and methods to suit local contexts. 
Authors’ initial framework can be tested and developed further.
Keywords: health equity, immunisation, low and middle-income countries, poverty, urban health
Procedia PDF Downloads 139
365 Impact of Transportation on Access to Reproductive and Maternal Health Services in Northeast Cambodia: A Policy Brief
Authors: Zaman Jawahar, Anne Rouve-Khiev, Elizabeth Hoban, Joanne Williams
Abstract:
Ensuring access to timely obstetric care is essential to prevent maternal deaths. Geographical barriers pose significant challenges for women accessing quality reproductive and maternal health services in rural Cambodia. This policy brief affirms the need to address the issue of transportation and cost (direct and indirect) as critical barriers to accessing reproductive and maternal health (RMH) services in four provinces in Northeast Cambodia (Kratie, Ratanak Kiri, Mondul Kiri, Stung Treng). A systematic search of the literature identified 1,116 articles, and only ten articles from low- and middle-income countries met the inclusion criteria. The ten articles reported on transportation and cost related to accessing RMH services. In addition, research findings from Partnering to Save Lives (PSL) studies in the four provinces were included in the analysis. Thematic data analysis using the information in the ten articles and PSL research findings was conducted, and the findings are presented in this paper. The key findings identify critical barriers to accessing RMH services in the four provinces: women experience 1) difficulties finding affordable transportation; 2) a lack of available and accessible transportation; 3) greater distance and traveling time to services; 4) poor geographical terrain; and 5) higher opportunity costs. Distance and poverty pose a double burden for women accessing RMH services, making a facility-based delivery less feasible compared to home delivery. Furthermore, indirect and hidden costs associated with institutional delivery may have an impact on women’s decision to seek RMH care. Existing health financing schemes in Cambodia such as the Health Equity Fund (HEF) and the Voucher Scheme contributed to the solution but have also shown some limitations. These schemes contribute to improving access to RMH services for the poorest group, but the barrier of transportation costs remains. In conclusion, initiatives that are proven to be effective in the Cambodian context should continue or be expanded in conjunction with the HEF, and special consideration should be given to communities living in geographically remote regions and difficult-to-access areas. The following strategies are recommended: 1) maintain and further strengthen transportation support in the HEF scheme; 2) expand community-based initiatives such as Community Managed Health Equity Funds and Village Saving Loans Associations; 3) establish maternity waiting homes; and 4) include antenatal and postnatal care in the provision of integrated outreach services. This policy brief can be used to inform key policymakers and provide evidence that can assist them to develop strategies to increase poor women’s access to RMH services in low-income settings, taking into consideration the geographic distance and other indirect costs associated with a facility-based delivery.
Keywords: access, barriers, northeast Cambodia, reproductive and maternal health service, transportation and cost
Procedia PDF Downloads 140
364 The Optimal Irrigation in the Mitidja Plain
Authors: Gherbi Khadidja
Abstract:
In the Mediterranean region, water resources are limited and very unevenly distributed in space and time. The main objective of this project is the development of a wireless network for the management of water resources in northern Algeria, in the Mitidja plain, to help farmers irrigate in the most optimized way and address the problem of water shortage in the region. To this end, we will develop an aid tool that can modernize and replace some traditional techniques, according to the real needs of the crops, the soil conditions, and the climatic conditions (soil moisture, precipitation, characteristics of the unsaturated zone). These data are collected in real time by sensors, analyzed by an algorithm, and displayed on a mobile application and a website. The results are essential information and alerts with recommendations for action, helping farmers ensure the sustainability of the agricultural sector under water shortage conditions. In the first part, we set up a wireless sensor network for precise management of water resources. Soil water content is measured with equipment such as the Watermark probe, connected via an acquisition card to an Arduino Uno, which collects the captured data and transmits them through a GSM module to a website, where they are stored in a database for later study. In the second part, we display the results on a website or a mobile application drawing on this database to remotely manage our smart irrigation system, which allows farmers to use this technology and gives growers remote access, via wireless communication, to field conditions and the irrigation operation from home or the office. The tool to be developed will also draw on satellite imagery for land use and soil moisture. These tools will make it possible to follow the evolution of crop water needs over time and to predict the impact on water resources. According to the references consulted, if such a tool is used, it can reduce irrigation volumes by up to 40%, which represents more than 100 million m3 of savings per year for the Mitidja. This volume is equivalent to a medium-size dam.
Keywords: optimal irrigation, soil moisture, smart irrigation, water management
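A minimal sketch of the server-side decision logic behind such a system is given below: a reading reported by the field node (soil water tension plus recent rainfall) is turned into an irrigation recommendation. The thresholds and field names are illustrative assumptions, not the project's actual algorithm.

```python
# Turn a field-node reading into an irrigation recommendation (illustrative thresholds).

IRRIGATE_ABOVE_KPA = 60     # Watermark-style sensors report soil water tension; drier soil = higher kPa
SKIP_IF_RAIN_MM = 5         # skip irrigation if meaningful rain fell in the last 24 h

def recommend(reading):
    tension_kpa = reading["soil_tension_kpa"]
    rain_mm = reading.get("rain_last_24h_mm", 0.0)
    if rain_mm >= SKIP_IF_RAIN_MM:
        return "skip: recent rainfall covers crop demand"
    if tension_kpa >= IRRIGATE_ABOVE_KPA:
        return "irrigate: soil water tension above threshold"
    return "wait: soil moisture still adequate"

print(recommend({"soil_tension_kpa": 72, "rain_last_24h_mm": 0.0}))
print(recommend({"soil_tension_kpa": 35, "rain_last_24h_mm": 8.0}))
```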
Procedia PDF Downloads 108
363 Social Media Governance in UK Higher Education Institutions
Authors: Rebecca Lees, Deborah Anderson
Abstract:
Whilst the majority of research into social media in education focuses on the applications for teaching and learning environments, this study looks at how such activities can be managed by investigating the current state of social media regulation within UK higher education. Social media has pervaded almost all aspects of higher education; from marketing, recruitment and alumni relations to both distance and classroom-based learning and teaching activities. In terms of who uses it and how it is used, social media is growing at an unprecedented rate, particularly amongst the target market for higher education. Whilst the platform presents opportunities not found in more traditional methods of communication and interaction, such as speed and reach, it also carries substantial risks that come with inappropriate use, lack of control and issues of privacy. Typically, organisations rely on the concept of a social contract to guide employee behaviour to conform to the expectations of that organisation. Yet, where academia and social media intersect, applying the notion of a social contract to enforce governance may be problematic: firstly, considering the emphasis on treating students as customers, with a growing focus on the use and collection of satisfaction metrics; and secondly, regarding the notion of academics' freedom of speech, opinion and discussion, which is a long-held tradition of learning instruction. Therefore, the need for sound governance procedures to support expectations of online behaviour is vital, especially when the speed and breadth of adoption of social media activities has in the past outrun organisations' abilities to manage it. An analysis of the current level of governance was conducted by gathering relevant policies, guidelines and best practice documentation available online via internet search and institutional requests. The documents were then subjected to a content analysis in the second phase of this study to determine the approach taken by institutions to apply such governance. Documentation was separated according to audience, i.e., applicable to staff, students or all users. Given that many of these documents included guests and visitors to the institution within their scope, easy accessibility was considered important. Yet, within the UK only about half of all education institutions had explicit social media governance documentation available online without requiring member access or considerable searching. Where they existed, the majority focused solely on employee activities and tended to be policy based rather than rooted in guidelines or best practices, or held a fallback position of governing online behaviour via implicit instructions within IT and computer regulations. Explicit instructions on expected online behaviours are therefore lacking within UK HE. Given the number of educational practices that now include significant online components, it is imperative that education organisations keep up to date with the progress of social media use. Initial results from the second phase of this study, which analyses the content of the governance documentation, suggest that the documents require reading levels at or above that of the target audience, with considerable variability in length and layout. Further analysis will add to this growing field of investigating social media governance within higher education.
Keywords: governance, higher education, policy, social media
Procedia PDF Downloads 181
362 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure
Authors: Esra Zengin, Sinan Akkar
Abstract:
Reliable and accurate prediction of nonlinear structural response requires specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. The current research has mainly focused on selection and manipulation of real earthquake records that can be seen as the most critical step in the performance based seismic design and assessment of the structures. Utilizing amplitude scaled ground motions that matches with the target spectra is commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match target spectrum such as scenario-based spectrum derived from ground motion prediction equations, Uniform Hazard Spectrum (UHS), Conditional Mean Spectrum (CMS) or Conditional Spectrum (CS). Different sets of criteria exist among those developed methodologies to select and scale ground motions with the objective of obtaining robust estimation of the structural performance. This study presents ground motion selection and scaling procedure that considers the spectral variability at target demand with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match target median and corresponding variance within a specified period interval. The efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between scaled median and the target spectra where the dispersion of the earthquake shaking is preserved along the period interval. The impact of the spectral variability on nonlinear response distribution is investigated at the level of inelastic single degree of freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimations, results are compared with those obtained by CMS-based scaling methodology. The variability in fragility curves due to the consideration of dispersion in ground motion selection process is also examined.Keywords: ground motion selection, scaling, uncertainty, fragility curve
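A highly simplified sketch of the scaling stage described above is given below: one scale factor per record is sought so that the median of the scaled log-spectra tracks the target median, with a penalty on the mismatch to a target dispersion over the period interval. The spectra, target values, and objective are illustrative assumptions, not the paper's algorithm.

```python
# Simplified record-scaling sketch: match target log-median and dispersion over a period range.
import numpy as np
from scipy.optimize import minimize

periods = np.linspace(0.1, 2.0, 20)
rng = np.random.default_rng(1)
ln_sa = rng.normal(loc=-1.0, scale=0.4, size=(30, periods.size))  # ln Sa of 30 candidate records
ln_target_median = np.full(periods.size, -0.8)                    # ln of target median spectrum
target_dispersion = np.full(periods.size, 0.3)                    # target standard deviation of ln Sa

def objective(ln_scales):
    scaled = ln_sa + ln_scales[:, None]          # scaling shifts each log-spectrum by a constant
    median_err = np.mean(scaled, axis=0) - ln_target_median
    disp_err = np.std(scaled, axis=0) - target_dispersion
    return np.sum(median_err ** 2) + np.sum(disp_err ** 2)

res = minimize(objective, x0=np.zeros(ln_sa.shape[0]), method="L-BFGS-B")
scale_factors = np.exp(res.x)
print("first five scale factors:", np.round(scale_factors[:5], 2))
```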
Procedia PDF Downloads 582
361 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions
Authors: Oscar E. Cariceo, Claudia V. Casal
Abstract:
Machine learning offers a set of techniques to promote social work interventions and can support practitioners' decisions by predicting new behaviors based on data produced by organizations, service agencies, users, clients, or individuals. Machine learning techniques include a set of generalizable algorithms that are data-driven, which means that rules and solutions are derived by examining data, based on the patterns that are present within any data set. In other words, the goal of machine learning is to teach computers through 'examples', using training data to test specific hypotheses and predict what a certain outcome would be, based on a current scenario, and to improve on that experience. Machine learning can be classified into two general categories depending on the nature of the problem that the technique needs to tackle. First, supervised learning involves a dataset whose outputs are already known. Supervised learning problems are categorized into regression problems, which involve prediction of quantitative variables using a continuous function, and classification problems, which seek to predict results for discrete qualitative variables. For social work research, machine learning generates predictions as a key element to improving social interventions on complex social issues by providing better inference from data and establishing more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on the interaction and behavior of gender, age, grade, type of school, and self-esteem sentiments. The model can predict with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help to promote programs to avoid cyberbullying at schools and improve evidence-based practice.
Keywords: cyberbullying, evidence based practice, machine learning, social work research
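A minimal sketch of the kind of classifier described is shown below, using scikit-learn's logistic regression; the feature encoding and the synthetic data are placeholders, not the actual survey variables or results.

```python
# Logistic regression on placeholder survey-style features (synthetic data for illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),        # gender (encoded)
    rng.integers(12, 19, n),      # age
    rng.integers(7, 13, n),       # grade
    rng.integers(0, 3, n),        # type of school (encoded)
    rng.normal(0, 1, n),          # self-esteem score
])
y = rng.integers(0, 2, n)         # 1 = experienced cyberbullying

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```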
Procedia PDF Downloads 167
360 Unleashing the Power of Cerebrospinal System for a Better Computer Architecture
Authors: Lakshmi N. Reddi, Akanksha Varma Sagi
Abstract:
Studies on biomimetics are largely developed, deriving inspiration from natural processes in our objective world to develop novel technologies. Recent studies are diverse in nature, making their categorization quite challenging. Based on an exhaustive survey, we developed categorizations based on either the essential elements of nature - air, water, land, fire, and space, or on form/shape, functionality, and process. Such diverse studies as aircraft wings inspired by bird wings, a self-cleaning coating inspired by a lotus petal, wetsuits inspired by beaver fur, and search algorithms inspired by arboreal ant path networks lend themselves to these categorizations. Our categorizations of biomimetic studies allowed us to define a different dimension of biomimetics. This new dimension is not restricted to inspiration from the objective world. It is based on the premise that the biological processes observed in the objective world find their reflections in our human bodies in a variety of ways. For example, the lungs provide the most efficient example for liquid-gas phase exchange, the heart exemplifies a very efficient pumping and circulatory system, and the kidneys epitomize the most effective cleaning system. The main focus of this paper is to bring out the magnificence of the cerebro-spinal system (CSS) insofar as it relates to our current computer architecture. In particular, the paper uses four key measures to analyze the differences between CSS and human- engineered computational systems. These are adaptability, sustainability, energy efficiency, and resilience. We found that the cerebrospinal system reveals some important challenges in the development and evolution of our current computer architectures. In particular, the myriad ways in which the CSS is integrated with other systems/processes (circulatory, respiration, etc) offer useful insights on how the human-engineered computational systems could be made more sustainable, energy-efficient, resilient, and adaptable. In our paper, we highlight the energy consumption differences between CSS and our current computational designs. Apart from the obvious differences in materials used between the two, the systemic nature of how CSS functions provides clues to enhance life-cycles of our current computational systems. The rapid formation and changes in the physiology of dendritic spines and their synaptic plasticity causing memory changes (ex., long-term potentiation and long-term depression) allowed us to formulate differences in the adaptability and resilience of CSS. In addition, the CSS is sustained by integrative functions of various organs, and its robustness comes from its interdependence with the circulatory system. The paper documents and analyzes quantifiable differences between the two in terms of the four measures. Our analyses point out the possibilities in the development of computational systems that are more adaptable, sustainable, energy efficient, and resilient. It concludes with the potential approaches for technological advancement through creation of more interconnected and interdependent systems to replicate the effective operation of cerebro-spinal system.Keywords: cerebrospinal system, computer architecture, adaptability, sustainability, resilience, energy efficiency
Procedia PDF Downloads 96
359 Modern Architecture and the Scientific World Conception
Authors: Sean Griffiths
Abstract:
Introduction: This paper examines the expression of ‘objectivity’ in architecture in the context of the post-war rejection of this concept. It aims to re-examine the question in light of the assault on truth characterizing contemporary culture and of the unassailable truth of the climate emergency. The paper analyses the search for objective truth as it was prosecuted in the Modern Movement in the early 20th century, looking at the extent to which this quest was successful in contributing to the development of a radically new, politically-informed architecture and the extent to which its particular interpretation of objectivity, limited that development. The paper studies the influence of the Vienna Circle philosophers Rudolph Carnap and Otto Neurath on the pedagogy of the Bauhaus and the architecture of the Neue Sachlichkeit in Germany. Their logical positivism sought to determine objective truths through empirical analysis, expressed in an austere formal language as part of a ‘scientific world conception’ which would overcome metaphysics and unverifiable mystification. These ideas, and the concurrent prioritizing of measurement as the determinant of environmental quality, became key influences in the socially-driven architecture constructed in the 1920s and 30s by Bauhaus architects in numerous German Cities. Methodology: The paper reviews the history of the early Modern Movement and summarizes accounts of the relationship between the Vienna Circle and the Bauhaus. It looks at key differences in the approaches Neurath and Carnap took to the achievement of their shared philosophical and political aims. It analyses how the adoption of Carnap’s foundationalism influenced the architectural language of modern architecture and compares, through a close reading of the structure of Neurath’s ‘protocol sentences,’ the latter’s alternative approach, speculating on the possibility that its adoption offered a different direction of travel for Modern Architecture. Findings: The paper finds that the adoption of Carnap’s foundationalism, while helping Modern Architecture forge a new visual language, ultimately limited its development and is implicated in its failure to escape the very metaphysics against which it had set itself. It speculates that Neurath’s relational language-based approach to the issue of establishing objectivity has its architectural corollary in the process of revision and renovation that offers new ways an ‘objective’ language of architecture might be developed in a manner that is more responsive to our present-day crisis. Conclusion: The philosophical principles of the Vienna Circle and the architects of the Modern Movement had much in common. Both contributed to radical historical departures which sought to instantiate a world scientific conception in their respective fields, which would attempt to banish mystification and metaphysics and would align itself with socialism. However, in adopting Carnap’s foundationalism as the theoretical basis for the new architecture, Modern Architecture not only failed to escape metaphysics but arguably closed off new avenues of development to itself. The adoption of Neurath’s more open-ended and interactive approach to objectivity offers possibilities for new conceptions of the expression of objectivity in architecture that might be more tailored to the multiple crises we face today.Keywords: Bauhaus, logical positivism, Neue Sachlichkeit, rationalism, Vienna Circle
Procedia PDF Downloads 84
358 Physiological Assessment for Straightforward Symptom Identification (PASSify): An Oral Diagnostic Device for Infants
Authors: Kathryn Rooney, Kaitlyn Eddy, Evan Landers, Weihui Li
Abstract:
The international mortality rate for neonates and infants has been declining at a disproportionally low rate when compared to the overall decline in child mortality in recent decades. A significant portion of infant deaths could be prevented with the implementation of low-cost and easy to use physiological monitoring devices, by enabling early identification of symptoms before they progress into life-threatening illnesses. The oral diagnostic device discussed in this paper serves to continuously monitor the key vital signs of body temperature, respiratory rate, heart rate, and oxygen saturation. The device mimics an infant pacifier, designed to be easily tolerated by infants as well as orthodontically inert. The fundamental measurements are gathered via thermistors and a pulse oximeter, each encapsulated in medical-grade silicone and wired internally to a microcontroller chip. The chip then translates the raw measurements into physiological values via an internal algorithm, before outputting the data to a liquid crystal display screen and an Android application. Additionally, a biological sample collection chamber is incorporated into the internal portion of the device. The movement within the oral chamber created by sucking on the pacifier-like device pushes saliva through a small check valve in the distal end, where it is accumulated and stored. The collection chamber can be easily removed, making the sample readily available to be tested for various diseases and analytes. With the vital sign monitoring and sample collection offered by this device, abnormal fluctuations in physiological parameters can be identified and appropriate medical care can be sought. This device enables preventative diagnosis for infants who may otherwise have gone undiagnosed, due to the inaccessibility of healthcare that plagues vast numbers of underprivileged populations.Keywords: neonate mortality, infant mortality, low-cost diagnostics, vital signs, saliva testing, preventative care
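As an illustration of how raw sensor readings become two of the vital signs listed above, the sketch below applies the standard Steinhart-Hart thermistor equation and the pulse-oximetry ratio-of-ratios estimate; the calibration constants are generic textbook values, not the device's calibration.

```python
# Convert raw thermistor and photoplethysmography readings into temperature and SpO2 estimates.
# Calibration constants are generic textbook values, used purely for illustration.
import math

def thermistor_temp_c(resistance_ohm, a=1.129e-3, b=2.341e-4, c=8.775e-8):
    """Steinhart-Hart equation: thermistor resistance -> temperature (deg C)."""
    ln_r = math.log(resistance_ohm)
    t_kelvin = 1.0 / (a + b * ln_r + c * ln_r ** 3)
    return t_kelvin - 273.15

def spo2_percent(ac_red, dc_red, ac_ir, dc_ir):
    """Ratio-of-ratios estimate used by pulse oximeters (empirical linear calibration)."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110.0 - 25.0 * r

print(f"temperature: {thermistor_temp_c(10000):.1f} C")   # ~25 C for a nominal 10 kOhm NTC
print(f"SpO2: {spo2_percent(0.02, 1.0, 0.04, 1.2):.1f} %")
```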
Procedia PDF Downloads 151
357 Neurofeedback for Anorexia-RelaxNeuron-Aimed in Dissolving the Root Neuronal Cause
Authors: Kana Matsuyanagi
Abstract:
Anorexia Nervosa (AN) is a psychiatric disorder characterized by a relentless pursuit of thinness and strict restriction of food. The current therapeutic approaches for AN predominantly revolve around outpatient psychotherapies, which create significant financial barriers for the majority of affected patients, hindering their access to treatment. Nonetheless, AN exhibits one of the highest mortality and relapse rates among psychological disorders, underscoring the urgent need to provide patients with an affordable self-treatment tool, enabling those unable to access conventional medical intervention to address their condition autonomously. To this end, a neurofeedback software tool, termed RelaxNeuron, was developed with the objective of providing an economical and portable means to aid individuals in self-managing AN. Electroencephalography (EEG) was chosen as the preferred modality for RelaxNeuron, as it aligns with the study's goal of supplying a cost-effective and convenient solution for addressing AN. The primary aim of the software is to ameliorate the negative emotional responses towards food stimuli and the accompanying aberrant eye-tracking patterns observed in AN patients, ultimately alleviating the profound fear towards food, an elemental symptom and, conceivably, the fundamental etiology of AN. The core functionality of RelaxNeuron hinges on the acquisition and analysis of EEG signals, alongside an electrocardiogram (ECG) signal, to infer the user's emotional state while viewing dynamic food-related imagery on the screen. Moreover, the software quantifies the user's performance in accurately tracking the moving food image. These two parameters then undergo further processing in the feedback algorithm, informing the delivery of either negative or positive feedback to the user. Preliminary test results have exhibited promising outcomes, suggesting the potential advantages of employing RelaxNeuron in the treatment of AN, as evidenced by its capacity to enhance emotional regulation and attentional processing through repetitive and persistent therapeutic interventions.
Keywords: Anorexia Nervosa, fear conditioning, neurofeedback, BCI
Procedia PDF Downloads 42
356 Offline Parameter Identification and State-of-Charge Estimation for Healthy and Aged Electric Vehicle Batteries Based on the Combined Model
Authors: Xiaowei Zhang, Min Xu, Saeid Habibi, Fengjun Yan, Ryan Ahmed
Abstract:
Recently, Electric Vehicles (EVs) have received extensive consideration since they offer a more sustainable and greener transportation alternative compared to fossil-fuel propelled vehicles. Lithium-Ion (Li-ion) batteries are increasingly being deployed in EVs because of their high energy density, high cell-level voltage, and low rate of self-discharge. Since Li-ion batteries represent the most expensive component in the EV powertrain, accurate monitoring and control strategies must be executed to ensure their prolonged lifespan. The Battery Management System (BMS) has to accurately estimate parameters such as the battery State-of-Charge (SOC), State-of-Health (SOH), and Remaining Useful Life (RUL). In order for the BMS to estimate these parameters, an accurate and control-oriented battery model has to work collaboratively with a robust state and parameter estimation strategy. Since battery physical parameters, such as the internal resistance and diffusion coefficient change depending on the battery state-of-life (SOL), the BMS has to be adaptive to accommodate for this change. In this paper, an extensive battery aging study has been conducted over 12-months period on 5.4 Ah, 3.7 V Lithium polymer cells. Instead of using fixed charging/discharging aging cycles at fixed C-rate, a set of real-world driving scenarios have been used to age the cells. The test has been interrupted every 5% capacity degradation by a set of reference performance tests to assess the battery degradation and track model parameters. As battery ages, the combined model parameters are optimized and tracked in an offline mode over the entire batteries lifespan. Based on the optimized model, a state and parameter estimation strategy based on the Extended Kalman Filter (EKF) and the relatively new Smooth Variable Structure Filter (SVSF) have been applied to estimate the SOC at various states of life.Keywords: lithium-ion batteries, genetic algorithm optimization, battery aging test, parameter identification
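A minimal sketch of SOC tracking with a one-state extended Kalman filter on a simple resistance-OCV cell model is given below; the model parameters, the linear OCV curve, and the noise settings are illustrative assumptions, not the identified combined-model parameters or the SVSF formulation used in the study.

```python
# One-state EKF for SOC on a crude OCV-resistance cell model (illustrative parameters only).
import numpy as np

CAPACITY_AS = 5.4 * 3600        # 5.4 Ah expressed in ampere-seconds
R0 = 0.02                       # ohmic resistance (placeholder)
OCV0, OCV_SLOPE = 3.4, 0.6      # crude linear OCV(soc) = OCV0 + OCV_SLOPE * soc

def ekf_soc(currents, voltages, dt=1.0, soc0=0.9):
    soc, p = soc0, 1e-3          # state estimate and its variance
    q, r = 1e-7, 1e-3            # process and measurement noise variances
    out = []
    for i_k, v_k in zip(currents, voltages):
        soc = soc - i_k * dt / CAPACITY_AS           # coulomb-counting prediction
        p += q
        h = OCV_SLOPE                                 # d(voltage)/d(soc) of the measurement model
        v_pred = OCV0 + OCV_SLOPE * soc - R0 * i_k
        k = p * h / (h * p * h + r)                   # Kalman gain
        soc += k * (v_k - v_pred)
        p *= (1 - k * h)
        out.append(soc)
    return np.array(out)

# synthetic 1 A discharge for 10 minutes, starting from 90% SOC
i = np.ones(600)
v = OCV0 + OCV_SLOPE * (0.9 - np.cumsum(i) / CAPACITY_AS) - R0 * i
print(f"estimated SOC after 10 min: {ekf_soc(i, v)[-1]:.3f}")
```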
Procedia PDF Downloads 265
355 Diagnosis of Choledocholithiasis with Endosonography
Authors: A. Kachmazova, A. Shadiev, Y. Teterin, P. Yartcev
Abstract:
Introduction: Biliary calculi disease still occupies the leading position among urgent diseases of the abdominal cavity, ranging in presentation from an asymptomatic course to life-threatening states. Nowadays, the arsenal of diagnostic methods for choledocholithiasis is quite wide: ultrasound, hepatobiliary scintigraphy (HBSG), magnetic resonance imaging (MRI), and endoscopic retrograde cholangiopancreatography (ERCP). Among them, transabdominal ultrasound (TA ultrasound) is the most accessible and routine diagnostic method. Nowadays, ERCP is the "gold" standard in the diagnosis and one-stage treatment of biliary tract obstruction. However, transpapillary techniques are accompanied by serious postoperative complications (postmanipulative pancreatitis (3-5%), endoscopic papillosphincterotomy bleeding (2%), cholangitis (1%)), with a lethality of 0.4%. HBSG and MRI are also quite informative methods in the diagnosis of choledocholithiasis. The small size of concrements and their localization in the intrapancreatic and retroduodenal parts of the common bile duct significantly reduce the informativeness of all the diagnostic methods described above, which demands additional study of this problem. Materials and Methods: 890 patients with a diagnosis of cholelithiasis (calculous cholecystitis) were admitted to the Sklifosovsky Scientific Research Institute of Hospital Medicine in the period from August 2020 to June 2021. Of these, 115 had mechanical jaundice caused by concrements in the bile ducts. Results: A final EUS diagnosis was made in all patients (100.0%). In all patients in whom the diagnosis of choledocholithiasis was revealed or confirmed by EUS, ERCP was performed urgently (within two days of its detection), as soon as the X-ray operating room was available; it confirmed the presence of concrements. All stones were removed by lithoextraction using a Dormia basket. The postoperative period in these patients was free of complications. Conclusions: EUS is the most informative and safe diagnostic method, which allows choledocholithiasis to be detected in the shortest time in patients with discrepancies between clinical-laboratory and instrumental diagnostic findings, which in turn helps in deciding promptly on further patient treatment tactics. We consider it reasonable to include EUS in the diagnostic algorithm for choledocholithiasis. Disclosure: Nothing to disclose.
Keywords: endoscopic ultrasonography, choledocholithiasis, common bile duct, concrement, ERCP
Procedia PDF Downloads 84
354 Authenticity from the Perspective of Locals: What Prince Edward Islanders Had to Say about Authentic Tourism Experiences
Authors: Susan C. Graham
Abstract:
Authenticity has grown to be ubiquitous within the tourism vernacular. Yet, agreement regarding what authenticity means in relation to tourism remains nebulous. In its simplest form, authenticity in tourism refers to products and experiences that provide insights into the social, cultural, economic, natural, historical, and political life of a place. But this definition is unwieldy in its scope and may help neither industry leaders nor tourists in identifying that which is authentic. Much of what is projected as authentic is a carefully curated and crafted message developed by marketers to appeal to visitors and bears little resemblance to the everyday lives of locals. So perhaps one way to identify authentic tourism experiences is to ask locals themselves. The purpose of this study was to explore the perspectives of locals with respect to what constituted an authentic tourism experience in Prince Edward Island (PEI), Canada. Over 600 volunteers in a tourism research panel were sent a survey asking them to describe authentic PEI experiences within ten sub-categories relevant to the local tourism industry. To make participation more manageable, each respondent was asked their perspectives on any three of the tourism sub-categories. Over 400 individuals responded, providing 1391 unique responses. The responses were grouped thematically using interpretive phenomenological analysis, whereby the participants' responses were clustered into higher-order groups to extract meaning. Two interesting thematic observations emerged: first, that respondents tended to clearly articulate and differentiate between intra- versus interpersonal experiences as a means of authentically experiencing PEI; and second, while respondents explicitly valued unstaged experiences over staged, several exceptions to this general rule were expressed. Responses could clearly be grouped into those that emphasized "going off the beaten path," "exploring pristine and untouched corners," "lesser known," "hidden," "going solo," and taking the opportunity to "slow down." Each of these responses was "self"-centered, and focused on the visitor discovering and exploring in search of greater self-awareness and inner peace. In contrast, other responses encouraged the interaction of visitors with locals as a means of experiencing the authentic place. Respondents cited "going deep-sea fishing" to learn about local fishers and their communities, stopping by "local farm stands" and speaking with farmers who worked the land for generations, patronizing "local restaurants, pubs, and b&bs", and partaking in performances or exhibits by local artists. These kinds of experiences, the respondents claimed, provide an authentic glimpse into a place's character. The second set of observations focused on the distinction between staged and unstaged experiences, with respondents overwhelmingly advocating for unstaged. Responses were clear in shunning "touristy," "packaged," and "fake" offerings for being inauthentic and misrepresenting the place as locals view it. Yet many respondents made exceptions for certain "staged" experiences, including (quite literally) the stage production of Anne of Green Gables based on the novel of the same name, the theatrical re-enactment of the founding of Canada, and visits to PEI's many provincial and national parks, all of which respondents considered both staged and authentic at the same time.
Keywords: authentic, local, Prince Edward Island, tourism
Procedia PDF Downloads 263
353 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models
Authors: Ravi Ande, Mousumi Hazari
Abstract:
One of the common geophysical techniques for mapping subsurface geo-electrical structures, extensive hydro-geological research, and engineering and environmental geophysics applications is the use of time domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings. A large loop TEM system consists of a large transmitter loop for energising the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any point inside or outside the source loop. In general, one can acquire data using one of the following configurations with a large loop source: with the receiver at the center point of the loop (central loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the expressions for the EM fields, as compared to the in-loop and offset-loop systems, the central loop system (for ground surveys) and the coincident loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping contaminated groundwater caused by hazardous waste, and for estimating the thickness of the permafrost layer. Because a proper analytical expression for the TEM response of the large loop TEM system over a layered earth model does not exist, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. Using the EMLCLLER algorithm, the forward computation is initially carried out in the frequency domain. Accordingly, EMLCLLER modifies the forward calculation scheme in NLSTCI to compute frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms.
Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine
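A generic sketch of the frequency-to-time conversion step is given below: a sampled frequency-domain response is turned into a transient by a numerical cosine transform. Practical TEM codes use fast digital-filter implementations of these transforms; the analytic test response here is only a placeholder to make the sketch self-contained and checkable.

```python
# Numerical Fourier cosine transform of a sampled frequency-domain response (illustrative only).
import numpy as np

omega = np.logspace(-2, 5, 20000)           # angular frequency grid (rad/s)
tau = 1e-3
H = 1.0 / (1.0 + 1j * omega * tau)          # placeholder response with known transient (1/tau) e^(-t/tau)

def cosine_transform(H_vals, omega, t):
    """h(t) = (2/pi) * integral of Re[H(w)] cos(w t) dw, trapezoidal rule on the sampled grid."""
    integrand = np.real(H_vals) * np.cos(omega * t)
    return (2.0 / np.pi) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(omega))

for t in (1e-4, 5e-4, 1e-3, 2e-3):
    print(f"t = {t:.0e} s -> h(t) ~ {cosine_transform(H, omega, t):.1f}"
          f" (analytic: {np.exp(-t / tau) / tau:.1f})")
```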
Procedia PDF Downloads 91
352 Characterization of Forest Fire Fuel in Shivalik Himalayas Using Hyperspectral Remote Sensing
Authors: Neha Devi, P. K. Joshi
Abstract:
A fire fuel map is one of the most critical factors for planning and managing fire hazard and risk. Wildfire is one of the most significant forms of global disturbance, impacting community dynamics, biogeochemical cycles, and local and regional climate across a wide range of ecosystems, from boreal forests to tropical rainforests. Assessment of fire danger is a function of forest type, fuelwood stock volume, moisture content, degree of senescence, and the fire management strategy adopted on the ground. Remote sensing has the potential to reduce the uncertainty in mapping fuels. Hyperspectral remote sensing is emerging as a very promising technology for wildfire fuel characterization. Fine spectral information also facilitates mapping of biophysical and chemical information that is directly related to the quality of forest fire fuels, including above-ground live biomass, canopy moisture, etc. We used Hyperion imagery acquired in February 2016 by the Hyperion sensor on board the EO-1 satellite over the Shivalik Himalayas, covering the area of Champawat, Uttarakhand state, and analysed four fuel characteristics. The main objective of this study was to present an overview of methodologies for mapping fuel properties using hyperspectral remote sensing data. The fuel characteristics analysed include fuel biomass, fuel moisture, fuel condition, and fuel type. Fuel moisture and fuel biomass were assessed through the expression of the liquid water bands. Fuel condition and type were assessed using green vegetation, non-photosynthetic vegetation, and soil as endmembers for spectral mixture analysis. Linear Spectral Unmixing, a partial spectral unmixing algorithm, was used to identify the spectral abundance of green vegetation, non-photosynthetic vegetation, and soil.
Keywords: forest fire fuel, Hyperion, hyperspectral, linear spectral unmixing, spectral mixture analysis
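A minimal sketch of linear spectral unmixing is given below: non-negative abundances of the three endmembers (green vegetation, non-photosynthetic vegetation, soil) are solved for a pixel spectrum by least squares. The endmember spectra are synthetic placeholders, not Hyperion signatures.

```python
# Linear spectral unmixing of one pixel via non-negative least squares (synthetic endmembers).
import numpy as np
from scipy.optimize import nnls

bands = 50
rng = np.random.default_rng(2)
E = np.abs(rng.normal(size=(bands, 3)))        # columns: GV, NPV, soil endmember spectra
true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund + rng.normal(scale=0.01, size=bands)   # mixed pixel with a little noise

abund, residual = nnls(E, pixel)               # non-negative least squares abundances
abund = abund / abund.sum()                    # optional sum-to-one normalisation
print("estimated abundances (GV, NPV, soil):", np.round(abund, 2))
```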
Procedia PDF Downloads 160
351 An Interdisciplinary Approach to Investigating Style: A Case Study of a Chinese Translation of Gilbert’s (2006) Eat Pray Love
Authors: Elaine Y. L. Ng
Abstract:
Elizabeth Gilbert’s (2006) memoir Eat, Pray, Love describes her travels to Italy, India, and Indonesia after a painful divorce. The author’s experiences with love, loss, the search for happiness, and meaning have resonated with a huge readership. Gilbert’s (2006) Eat, Pray, Love was first translated into Chinese by the Taiwanese translator He Pei-Hua and published in Taiwan in 2007 by Make Boluo Wenhua Chubanshe under the fairly catchy title “Enjoy! Traveling Alone.” The same translation was brought to China, republished in simplified Chinese characters by Shanxi Shifan Daxue Chubanshe in 2008, and retitled “To Be a Girl for the Whole Life.” Later on, the same translation in simplified Chinese characters was reprinted by Hunan Wenyi Chubanshe in 2013. This study employs Munday’s (2002) systemic model for descriptive translation studies to investigate the translation of Gilbert’s (2006) Eat, Pray, Love into Chinese by the Taiwanese translator Hu Pei-Hua. It adopts an interdisciplinary approach, combining systemic functional linguistics and corpus stylistics with sociohistorical research within a descriptive framework, to study the translator’s discursive presence in the text. The research consists of three phases. The first phase locates the target text within its socio-cultural context; the target-text context concerning the para-texts, readers’ responses, and the publishers’ orientation is explored. The second phase compares the source text and the target text to categorize translation shifts, using the methodological tools of systemic functional linguistics and corpus stylistics. The investigation concerns the rendering of mental clauses and speech and thought presentation. The final phase explains the causes of the translation shifts. The linguistic findings are related to the extra-textual information collected in an effort to ascertain the motivations behind the translator’s choices. There exist sets of possible factors that may have contributed to shaping the textual features of the given translation within a specific socio-cultural context. The study finds that the translator generally reproduces the mental clauses and the speech and thought presentation closely according to the original. Nevertheless, the language of the translation has been widely criticized as unidiomatic and stiff, losing the elegance of the original. In addition, the several Chinese translations of the given text produced by one Taiwanese and two Chinese publishers are basically the same; they are repackaged slightly differently, mainly with a change of the book cover and its captions for each version. By relating the textual findings to the extra-textual data of the study, it is argued that the popularity of the Chinese translation of Gilbert’s (2006) Eat, Pray, Love may not be attributed to the quality of the translation. Instead, it may have to do with the way the work was promoted strategically through the social media channels of the four e-bookstores promoting and selling the book online in China.Keywords: Chinese translation of Eat Pray Love, corpus stylistics, motivations for translation shifts, systemic approach to translation studies
Procedia PDF Downloads 173
350 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data
Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora
Abstract:
Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour, and it is the primary indicator of drilling efficiency. Maximizing the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model prior to drilling, and (2) making real-time adjustments of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. The top-rated wells, as a function of high ROP instances, are then distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase is concluded by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data is then consolidated into a heat-map as a function of ROP. A more optimum ROP performance is identified through the heat-map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least 10% savings in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.Keywords: drilling optimization, geological formations, machine learning, rate of penetration
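The conditioned-mean step of phase one can be illustrated with a small Inverse Distance Weighting sketch. The offset-well distances and parameter values below are hypothetical, chosen only to show how the weighting works.

```python
import numpy as np

def idw(distances, values, power=2.0):
    # Inverse Distance Weighting: weight each offset-well observation by
    # 1 / distance**power and return the weighted mean.
    d = np.asarray(distances, dtype=float)
    v = np.asarray(values, dtype=float)
    if np.any(d == 0):                      # exact co-location: return that value
        return float(v[d == 0].mean())
    w = 1.0 / d ** power
    return float(np.sum(w * v) / np.sum(w))

# Hypothetical offset wells: distance to the planned well (km) and their
# best-performing WOB (klbf), RPM and GPM at a given formation top
dist = [1.2, 3.5, 0.8, 5.0]
wob  = [22.0, 18.5, 24.0, 17.0]
rpm  = [120, 110, 135, 100]
gpm  = [650, 600, 700, 580]

print({"WOB": idw(dist, wob), "RPM": idw(dist, rpm), "GPM": idw(dist, gpm)})
```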
Procedia PDF Downloads 131
349 Image Processing-Based Maize Disease Detection Using Mobile Application
Authors: Nathenal Thomas
Abstract:
Corn, also known as maize (scientific name Zea mays subsp.), is a widely produced agricultural product used in the food chain and in many other agricultural products. Corn is highly adaptable: it comes in many different types, is employed in many different industrial processes, and tolerates a wide range of agro-climatic conditions. In Ethiopia, maize is among the most widely grown crops. Small-scale corn farming may be a household's only source of food in developing nations like Ethiopia. The aforementioned facts demonstrate that the country's requirement for this crop is very high, while, conversely, the crop's productivity is very low for a variety of reasons. The most damaging factor that greatly contributes to this imbalance between the crop's supply and demand is maize disease. The failure to diagnose diseases in maize plants until it is too late is one of the most important factors limiting crop output in Ethiopia. This study will aid in the early detection of such diseases and support farmers during the cultivation process, directly affecting the amount of maize produced. Diseases of maize plants, such as northern leaf blight and cercospora leaf spot, have distinct, visible symptoms. This study aims to detect the most frequent and damaging maize diseases using image processing based on deep learning, an efficient and widely used subset of machine learning. Deep learning uses networks that can be trained from unlabeled data without supervision (unsupervised learning); it simulates the processes the human brain goes through when digesting data. Its applications include speech recognition, language translation, object classification, and decision-making. The Convolutional Neural Network (CNN), also known as a ConvNet, is a class of deep learning models widely used for image classification, image detection, face recognition, and other problems. This study uses this state-of-the-art algorithm to detect maize diseases from photographs of maize leaves taken with a mobile phone.Keywords: CNN, Zea mays subsp, leaf blight, cercospora leaf spot
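A minimal sketch of the kind of CNN classifier described above is given below, assuming a Keras/TensorFlow environment, a three-class problem (healthy, northern leaf blight, cercospora leaf spot) and a hypothetical folder of labelled leaf photographs. The layer sizes and dataset layout are illustrative assumptions, not the architecture or data used in the study.

```python
# A small CNN for three-class maize leaf classification (illustrative only)
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 255),             # scale pixel values to [0, 1]
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),    # healthy / leaf blight / leaf spot
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory of labelled leaf photos, one sub-folder per class
train_ds = tf.keras.utils.image_dataset_from_directory(
    "maize_leaves/train", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=10)
```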
Procedia PDF Downloads 74
348 Effect of Varied Climate, Landuse and Human Activities on the Termite (Isoptera: Insecta) Diversity in Three Different Habitats of Shivamogga District, Karnataka, India
Authors: C. M. Kalleshwaraswamy, G. S. Sathisha, A. S. Vidyashree, H. B. Pavithra
Abstract:
Isoptera are an interesting group of social insects with different castes and a division of labour. They are primarily wood-feeders, but they also feed on a variety of other organic substrates, such as living trees, leaf litter, soil, lichens and animal faeces. The number of species and their biomass are especially large in the tropics. In natural ecosystems, they perform a beneficial role in nutrient cycles by accelerating decomposition. The magnitude and dimension of the ecological role played by termites is a function of their diversity, population density, and biomass. Termite assemblage composition responds strongly to habitat disturbance and may be indicative of quantitative changes in the decomposition process. Many previous studies in the Western Ghats region of India report increased anthropogenic activities that adversely affect soil macrofauna and diversity. Shivamogga district provides a good opportunity to study the effect of topography, cropping pattern and human disturbance on the termite fauna, thereby acquiring accurate baseline information for conservation decision-making. The district has three distinct agro-ecological areas: the maidan area, the semi-malnad and the Western Ghats region. Thus, the district provides a unique opportunity to study the effect of varied climate and anthropogenic disturbance on termite diversity. The standard belt transect protocol developed by Eggleton et al. (1997) was used for sampling termites. Sampling was done at monthly intervals from September 2014 to August 2015 in the Western Ghats, semi-malnad and maidan habitats. In each habitat, the transect was 100 m long and 2 m wide and divided into 20 contiguous sections, each 5 x 2 m. Within each section, all probable termite microhabitats were searched, including dead logs, fallen trees, branches, sticks, leaf litter and vegetation. All castes collected were labelled, preserved in 80% alcohol, counted and identified to species level. The number of encounters of a species in the transect was used as an indicator of the relative abundance of that species. Species diversity, species richness and density were compared across the three habitats (Western Ghats, semi-malnad and maidan). The study indicated differences in species composition among the three habitats. A total of 15 species belonging to five genera and four subfamilies were recorded across the three habitats. Eleven species, viz., Odontotermes obesus, O. feae, O. anamallensis, O. bellahunisensis, O. adampurensis, O. boveni, Microcerotermes fletcheri, M. pakistanicus, Nasutitermes anamalaiensis, N. indicola and N. krishna, were recorded in the Western Ghats region. Similarly, 11 species, viz., Odontotermes obesus, O. feae, O. anamallensis, O. bellahunisensis, O. hornii, O. bhagwathi, Microtermes obesi, Microcerotermes fletcheri, M. pakistanicus, Nasutitermes indicola and Pericapritermes sp., were recorded in the semi-malnad habitat. However, only four species, viz., O. obesus, O. feae, Microtermes obesi and Pericapritermes sp., were recorded in the maidan area. The Shannon-Wiener diversity index (H) showed that the Western Ghats had the highest species diversity (1.56), followed by semi-malnad (1.36), with the lowest value in the maidan habitat (0.89). The highest value of Simpson's index (D) was observed in the Western Ghats habitat (0.70), indicating the most diverse assemblage, followed by semi-malnad (0.58), with the lowest in maidan (0.53). Similarly, evenness was highest (0.65) in the Western Ghats, followed by maidan (0.64), and lowest in the semi-malnad habitat (0.54). Menhinick's index (Dmn) ranged from 0.03 to 0.06 across the habitats in the study area: the highest value was observed in the Western Ghats (0.06), followed by semi-malnad (0.05), with the lowest in maidan (0.03). The study conclusively demonstrated that the Western Ghats had the highest species diversity compared to the semi-malnad and maidan habitats, indicating that the latter two habitats are continuously subjected to anthropogenic disturbance. Efforts are needed to conserve the uncommon species, which otherwise may become extinct due to human activities.Keywords: anthropogenic disturbance, Isoptera, termite species diversity, Western Ghats
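The diversity measures reported above can be computed from per-species encounter counts as in the short sketch below. The counts are hypothetical, and the exact formulations used in the study (for example, whether Simpson's index was reported as 1 minus the sum of squared proportions) are assumptions.

```python
import numpy as np

def diversity_indices(counts):
    # Compute common diversity indices from a vector of per-species abundances
    # (here, the number of encounters per species along a transect).
    n = np.asarray(counts, dtype=float)
    n = n[n > 0]
    N, S = n.sum(), len(n)
    p = n / N
    shannon   = -np.sum(p * np.log(p))        # Shannon-Wiener H
    simpson   = 1.0 - np.sum(p ** 2)          # Simpson's diversity, 1 - sum(p^2)
    evenness  = shannon / np.log(S)           # Pielou's evenness J
    menhinick = S / np.sqrt(N)                # Menhinick's richness Dmn
    return {"H": shannon, "D": simpson, "J": evenness, "Dmn": menhinick}

# Hypothetical encounter counts for the 11 species recorded in one habitat
counts = [40, 25, 18, 12, 10, 8, 6, 5, 4, 3, 2]
print(diversity_indices(counts))
```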
Procedia PDF Downloads 266
347 Ethical Decision-Making by Healthcare Professionals during Disasters: Izmir Province Case
Authors: Gulhan Sen
Abstract:
Disasters can result in many deaths and injuries. In these difficult times, accessible resources are limited, the balance between demand and supply is distorted, and urgent interventions are needed. The disproportion between accessible resources and intervention capacity makes triage a necessity in every stage of disaster response. Healthcare professionals, who are in charge of triage, have to evaluate swiftly and make ethical decisions about which patients need priority and urgent intervention given the limited available resources. For such critical times in disaster triage, 'doing the greatest good for the greatest number of casualties' is adopted as a code of practice. However, there is no guide for healthcare professionals about ethical decision-making during disasters, and this study is expected to serve as a source in the preparation of such a guide. This study aimed to examine whether the qualities of healthcare professionals in Izmir related to disaster triage were adequate and whether these qualities influence their capacity to make ethical decisions. The researcher used a survey developed for data collection. The survey included two parts. In part one, 14 questions solicited information about the socio-demographic characteristics of the respondents and their knowledge of the ethical principles of disaster triage and the allocation of scarce resources. Part two included four disaster scenarios adopted from the existing literature, and respondents were asked to make ethical triage decisions based on the provided scenarios. The survey was completed by 215 healthcare professionals working in Emergency-Medical Stations, National Medical Rescue Teams and Search-Rescue-Health Teams in Izmir. The data were analyzed with SPSS software; the Chi-Square Test, Mann-Whitney U Test, Kruskal-Wallis Test and Linear Regression Analysis were utilized. According to the results, 51.2% of the participants had an inadequate knowledge level of the ethical principles of disaster triage and the allocation of scarce resources. It was also found that participants did not tend to make ethical decisions on the four disaster scenarios, which included ethical dilemmas; they remained caught in dilemmas concerning performing cardio-pulmonary resuscitation, managing limited resources and making decisions about letting patients die. Results also showed that participants who had more experience in disaster triage teams were more likely to make ethical decisions on disaster triage than those with little or no experience in such teams (p < 0.01). Moreover, as their knowledge level of the ethical principles of disaster triage and the allocation of scarce resources increased, their tendency to make ethical decisions also increased (p < 0.001). In conclusion, having an inadequate knowledge level of ethical principles and being inexperienced affect ethical decision-making during disasters. The results of this study therefore suggest that more training on disaster triage should be provided in the pre-impact phase of disasters. In addition, the ethical dimension of disaster triage should be included in the syllabi of ethics classes in the vocational training of healthcare professionals. Drills, simulations, and board exercises can be used to improve the ethical decision-making abilities of healthcare professionals. Disaster scenarios in which ethical dilemmas are faced should be prepared for such applied training programs.Keywords: disaster triage, medical ethics, ethical principles of disaster triage, ethical decision-making
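The statistical tests named above can be reproduced outside SPSS, for example with SciPy, as in the following sketch. The contingency table, scores and group labels are synthetic stand-ins for the survey variables, not the study's data.

```python
# Sketch of the reported tests on synthetic data (illustrative only)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Chi-square test of independence: knowledge level (adequate/inadequate)
# vs. triage-team experience (yes/no), hypothetical cell counts
contingency = np.array([[45, 60],
                        [65, 45]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(contingency)

# Mann-Whitney U: ethical-decision scores, experienced vs. inexperienced respondents
scores_exp   = rng.normal(3.2, 0.8, 90)
scores_inexp = rng.normal(2.7, 0.8, 125)
u_stat, p_u = stats.mannwhitneyu(scores_exp, scores_inexp)

# Kruskal-Wallis: decision scores across three professional groups
g1, g2, g3 = rng.normal(3.0, 1, 70), rng.normal(2.8, 1, 70), rng.normal(3.3, 1, 75)
h_stat, p_kw = stats.kruskal(g1, g2, g3)

print(p_chi2, p_u, p_kw)
```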
Procedia PDF Downloads 244
346 Transforming Data Science Curriculum Through Design Thinking
Authors: Samar Swaid
Abstract:
Today, corporations are moving toward the adoption of Design-Thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in Design-Thinking, IDEO (Innovation, Design, Engineering Organization), defines Design-Thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has been shown to be the road to developing innovative applications, interactive systems, scientific software and healthcare applications, and even to rethinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure a "wow" effect on consumers. The Association for Computing Machinery task force on Data Science programs states that "Data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability." However, this definition hides the user behind the machine who works on data preparation, algorithm selection and model interpretation. Thus, the Data Science program includes design thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of Design-Thinking and teaching modules for data science programs.Keywords: data science, design thinking, AI, curriculum, transformation
Procedia PDF Downloads 79
345 Optimization of Biogas Production Using Co-digestion Feedstocks via Anaerobic Technology
Authors: E Tolufase
Abstract:
The demand for, high costs of, and health implications of using energy derived from hydrocarbon compounds have necessitated the continuous search for alternative sources of energy. The world energy market faces several challenges: depletion of fossil fuel reserves, population explosion, lack of energy security, and economic and urbanization growth; in addition, some rural areas in Nigeria still depend largely on wood, charcoal, kerosene and petrol, among others, as their sources of energy. To overcome these shortfalls in energy supply and demand, and in view of the risks of global climate change due to greenhouse gas emissions and other pollutants from fossil fuel combustion, much attention has been given to efficiently harnessing renewable energy sources. Among the renewable energy resources, biogas is a very promising clean energy technology for power production, vehicle fuel and domestic use. Therefore, optimization of biogas yield and quality is imperative. Hence, this study investigated the yield and quality of biogas using a low-cost bio-digester and combinations of various feedstocks, referred to as co-digestion. A batch (discontinuous) bio-digester type was used because it was cheap, simple and appropriate for the different substrates used to obtain the desired results. Three substrates were used, cow dung, chicken droppings and lemon grass, digested in five separate 21-litre digesters, A, B, C, D and E, and the gas collection system was designed using locally available materials. For single digestion, cow dung, chicken droppings and lemon grass were placed in bio-digesters A, B and C, respectively; the three substrates were co-digested in digester D at a mixing ratio of 7:1:2 and in digester E at a ratio of 5:3:2. The respective feedstock materials were collected locally, digested and analyzed in accordance with standard procedures. They were pre-fermented for a period of 10 days before being introduced into the digesters. They were digested for a retention period of 28 days, and the physicochemical parameters, namely pressure, temperature, pH, the volume of the gas collector system and the volume of biogas produced, were closely monitored and recorded daily. The pH and temperature ranged from 6.0 to 8.0 and from 22°C to 35°C, respectively. For the single substrates, bio-digester A (cow dung only) produced a total biogas volume of 0.1607 m³ (an average of 0.0054 m³ daily), while B (chicken droppings) produced 0.1722 m³ (an average of 0.0057 m³ daily) and C (lemon grass) produced 0.1035 m³ (an average of 0.0035 m³ daily). For the co-digested substrates, bio-digester D produced a total of 0.2007 m³ (an average of 0.0067 m³ daily) and bio-digester E produced 0.1991 m³ (an average of 0.0066 m³ daily). It is obvious from the results that combining different substrates gave higher yields than when a single feedstock was used, and that the mixing ratio played a role in the yield improvement. Bio-digesters D and E contained the same substrates mixed in different ratios, but a higher yield was recorded in D, with a mixing ratio of 7:1:2, than in E, with a ratio of 5:3:2. Therefore, co-digestion of substrates and their mixing proportions are important factors for the optimization of biogas production.Keywords: anaerobic, batch, biogas, biodigester, digestion, fermentation, optimization
Procedia PDF Downloads 27
344 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer
Authors: Binder Hans
Abstract:
Cancer is no longer seen solely as a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning that can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as the genome, transcriptome and epigenome is inevitable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to discover the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to uncover cancer genesis, progression and heterogeneity. Basic challenges and tasks arise 'beyond sequencing' because of the large size of the data, their complexity, and the need to search for hidden structures in the data, for knowledge mining to discover biological function, and for systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOMs) represent one interesting option for tackling these bioinformatics tasks. The SOM method enables recognizing complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the discovery of complex diseases, here gliomas, melanomas and colon cancer, at the molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic and methylomic data. The integrative omics portrayal approach is based on the joint training of the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas
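A minimal sketch of the SOM training underlying such portrayal methods is given below. It is a generic implementation on synthetic data, not the authors' portrayal pipeline, and the grid size, learning-rate and neighbourhood schedules are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    # Minimal self-organizing map: each grid node holds a weight vector the
    # same length as one sample (e.g., one gene-centred expression profile).
    rng = np.random.default_rng(seed)
    h, w = grid
    n_features = data.shape[1]
    weights = rng.normal(size=(h, w, n_features))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)

    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs)    # shrinking neighbourhood
        for x in rng.permutation(data):
            # best-matching unit: node whose weights are closest to the sample
            dist = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dist), (h, w))
            # pull the BMU and its grid neighbours towards the sample
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
    return weights

# Hypothetical data: 200 "genes" x 12 "samples" (rows are mapped onto the grid)
data = np.random.default_rng(1).normal(size=(200, 12))
som = train_som(data)
print(som.shape)   # (10, 10, 12)
```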
Procedia PDF Downloads 148
343 Delineating Floodplain along the Nasia River in Northern Ghana Using HAND Contour
Authors: Benjamin K. Ghansah, Richard K. Appoh, Iliya Nababa, Eric K. Forkuo
Abstract:
The Nasia River is an important source of water for domestic and agricultural purposes for the inhabitants of its catchment. Major farming activities take place within the floodplain of the river and its network of tributaries. The actual inundation extent of the river system is, however, unknown. Reasons for this lack of information include financial constraints and inadequate human resources, as flood modelling is becoming increasingly complex by the day. Knowledge of the inundation extent will help in assessing the risk posed by the annual flooding of the river and in planning flood recession agricultural activities. This study used a simple terrain-based algorithm, Height Above Nearest Drainage (HAND), to delineate the floodplain of the Nasia River and its tributaries. The HAND model is a drainage-normalized digital elevation model whose height reference is based on the local drainage network rather than on average mean sea level (AMSL). The underlying principle guiding the development of the HAND model is that hillslope flow paths behave differently when the reference gradient is to the local drainage network as compared to the seaward gradient. The new terrain model of the catchment was created using NASA's 30 m SRTM Digital Elevation Model (DEM) as the only data input. Contours (HAND contours) were then generated from the normalized DEM. Based on field flood inundation surveys, historical information on flooding of the area, as well as satellite images, a HAND contour of 2 m was found to correlate best with the flood inundation extent of the river and its tributaries. An accuracy of 75% was obtained when the surface area enclosed by the 2 m contour was compared with the surface area of the floodplain computed from a satellite image captured during the peak flooding season in September 2016. It was estimated that the flooding of the Nasia River and its tributaries created a floodplain area of 1011 km².Keywords: digital elevation model, floodplain, HAND contour, inundation extent, Nasia River
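The HAND computation itself can be sketched as follows: each cell's elevation is compared with the elevation of the drainage cell reached by following the flow directions downslope, and the floodplain is the set of cells at or below the chosen HAND contour (2 m in this study). The D8 coding, the tiny synthetic grid and the helper function are illustrative assumptions, not the actual SRTM-based workflow.

```python
import numpy as np

# One common D8 flow-direction coding mapped to (row, col) offsets (assumed here)
D8 = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
      16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def hand(dem, flow_dir, drainage, max_steps=10000):
    # Height Above Nearest Drainage: for each cell, walk the D8 flow path
    # downslope until a drainage cell is reached; HAND is the starting cell's
    # elevation minus the elevation of that drainage cell.
    rows, cols = dem.shape
    out = np.full(dem.shape, np.nan)
    for r in range(rows):
        for c in range(cols):
            i, j = r, c
            for _ in range(max_steps):
                if drainage[i, j]:
                    out[r, c] = dem[r, c] - dem[i, j]
                    break
                dr, dc = D8.get(int(flow_dir[i, j]), (0, 0))
                ni, nj = i + dr, j + dc
                if (dr, dc) == (0, 0) or not (0 <= ni < rows and 0 <= nj < cols):
                    break                     # pit, or path leaves the grid
                i, j = ni, nj
    return out

# Tiny synthetic example: a hillslope draining west towards a channel in column 0
dem = np.array([[12., 14., 16.],
                [11., 13., 15.],
                [10., 12., 14.]])
flow_dir = np.full((3, 3), 16)                # every cell drains west
drainage = np.zeros((3, 3), dtype=bool)
drainage[:, 0] = True                         # column 0 is the river

h = hand(dem, flow_dir, drainage)
floodplain = h <= 2.0                         # the 2 m HAND contour used in the study
print(h)
print(floodplain)
```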
Procedia PDF Downloads 454