Search results for: automatic mapping
592 Flood Inundation Mapping at Wuseta River, East Gojjam Zone, Amhara Regional State, Ethiopia
Authors: Arega Mulu
Abstract:
Flooding is a common phenomenon that will continue to be a leading risk as long as societies live and work in flood-prone areas. It occurs when the volume of water in a stream exceeds the capacity of the channel. In Ethiopia, urban flood events have become a severe problem in recent years. This flooding is mainly related to poorly planned city drainage schemes and land use patterns. Combined with this, the absence of detailed flood maps, of an early warning system, and of organized flood disaster mitigation actions at national and local levels further raises the gravity of the problem. Hence, this study produces flood inundation maps for the Wuseta River using the HEC-GeoRAS and HEC-RAS models. The flooded areas along the Wuseta River were mapped for different return periods. Peak flows for the various return periods were assessed using the HEC-RAS model, with GIS for spatial data processing and HEC-GeoRAS as the interface between HEC-RAS and GIS. The areas along the Wuseta River were simulated to be flooded for 5-, 10-, 25-, 50-, and 100-year return periods. For the 100-year return period flood, the maximum flood depth was 2.26 m and the maximum inundation width was 0.3 km on each side of the river. This maximum flood depth extended along the stretch from near the university to Debre Markos Town. Most of the affected area lay between the Wuseta market and the new Abaykunu bridge, and a smaller portion was affected from Abaykunu to the road crossing from Addis Ababa to Debre Markos Town. The outcome of this study will help the concerned bodies frame and advance policies according to the existing flood risk in the area.
Keywords: flood inundation, Wuseta River, HEC-HMS, HEC-RAS
Procedia PDF Downloads 85
591 Bridge Members Segmentation Algorithm of Terrestrial Laser Scanner Point Clouds Using Fuzzy Clustering Method
Authors: Donghwan Lee, Gichun Cha, Jooyoung Park, Junkyeong Kim, Seunghee Park
Abstract:
3D shape models of existing structures are required for many purposes, such as safety and operation management. Traditional 3D modeling methods are based on manual or semi-automatic reconstruction from close-range images, which is expensive and time-consuming. The Terrestrial Laser Scanner (TLS) is a common survey technique for measuring a 3D shape model quickly and accurately, and it is used at construction sites and in cultural heritage management. However, there are many limits to processing a TLS point cloud, because the raw point cloud is a massive volume of data, and the capability of carrying out useful analyses is also limited with unstructured 3D points. Thus, segmentation becomes an essential step whenever grouping of points with common attributes is required. In this paper, a member segmentation algorithm is presented to separate a raw point cloud that includes only 3D coordinates. The paper presents a clustering approach based on a fuzzy method for this objective: the Fuzzy C-Means (FCM) algorithm is reviewed and used in combination with a similarity-driven cluster merging method. It is applied to the point cloud acquired with a Leica ScanStation C10/C5 at the test bed. The test bed was a pedestrian bridge, about 32 m long and 2 m wide, connecting the 1st and 2nd engineering buildings at Sungkyunkwan University in Korea. The 3D point cloud of the test bed was constructed from a TLS measurement and divided by the segmentation algorithm into individual members. Experimental analyses of the results from the proposed unsupervised segmentation process are shown to be promising, and the segmented point cloud can then be used to manage the configuration of each member.
Keywords: fuzzy c-means (FCM), point cloud, segmentation, terrestrial laser scanner (TLS)
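The core FCM step described in the abstract can be sketched with the standard Fuzzy C-Means update equations applied to 3D coordinates. This is a minimal illustration under assumed defaults, not the authors' implementation; the similarity-driven cluster merging step is omitted, and all names are illustrative.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal FCM: returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(points, dtype=float)           # (N, 3) point cloud coordinates
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per point
    for _ in range(max_iter):
        W = U ** m                                # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # distance of every point to every center (epsilon avoids division by 0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))    # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

A point is assigned to the member (cluster) where its membership is largest; a merging pass over similar clusters would follow in the full pipeline.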
Procedia PDF Downloads 237
590 Alignment and Antagonism in Flux: A Diachronic Sentiment Analysis of Attitudes towards the Chinese Mainland in the Hong Kong Press
Authors: William Feng, Qingyu Gao
Abstract:
Despite the extensive discussions about Hong Kong’s sentiments towards the Chinese Mainland since the sovereignty transfer in 1997, there has been no large-scale empirical analysis of the changing attitudes in the mainstream media, which both reflect and shape sentiments in society. To address this gap, the present study uses an optimised semantic-based automatic sentiment analysis method to examine a corpus of news about China from 1997 to 2020 in three main Chinese-language newspapers in Hong Kong, namely Apple Daily, Ming Pao, and Oriental Daily News. The analysis shows that although the Hong Kong press had a positive emotional tone toward China in general, the overall trend of sentiment was becoming increasingly negative. Meanwhile, alignment and antagonism toward China both increased, providing empirical evidence of attitudinal polarisation in Hong Kong society. Specifically, Apple Daily’s depictions of China became increasingly negative, though with some positive turns before 2008, whilst Oriental Daily News consistently expressed more favourable sentiments. Ming Pao maintained an impartial stance toward China through an increased but balanced representation of positive and negative sentiments, with its subjectivity and sentiment intensity growing to an industry-standard level. The results provide new insights into the complexity of sentiments towards China in the Hong Kong press, and into media attitudes in general in terms of “us” and “them” positioning, by explicating the cross-newspaper and cross-period variations using an enhanced sentiment analysis method that incorporates sentiment-oriented and semantic role analysis techniques.
Keywords: media attitude, sentiment analysis, Hong Kong press, one country two systems
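The abstract does not disclose the details of the optimised semantic-based method. As a toy illustration of the general workflow (score each article, then fit a trend of mean sentiment over years), one might sketch the following; the lexicon, function names, and all values are invented for illustration only.

```python
# Toy sentiment scoring with an invented lexicon (a real system would use an
# optimised semantic lexicon plus semantic role analysis, as in the study).
POS = {"growth", "cooperation", "stable", "support"}
NEG = {"protest", "crisis", "conflict", "decline"}

def sentiment_score(text):
    """(positive - negative) word count, normalised by article length."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return (sum(t in POS for t in tokens) - sum(t in NEG for t in tokens)) / len(tokens)

def yearly_trend(scores_by_year):
    """Least-squares slope of mean sentiment over years (negative = souring tone)."""
    years = sorted(scores_by_year)
    ys = [sum(scores_by_year[y]) / len(scores_by_year[y]) for y in years]
    n = len(years)
    mx, my = sum(years) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(years, ys)) / \
           sum((x - mx) ** 2 for x in years)
```

A negative slope over 1997-2020 would correspond to the increasingly negative overall trend the study reports.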
Procedia PDF Downloads 124
589 Heavy Vehicle Traffic Estimation Using Automatic Traffic Recorders/Weigh-In-Motion Data: Current Practice and Proposed Methods
Authors: Muhammad Faizan Rehman Qureshi, Ahmed Al-Kaisy
Abstract:
Accurate estimation of traffic loads is critical for pavement and bridge design, among other transportation applications. Given the disproportionate impact of heavier axle loads on pavement and bridge structures, truck and heavy vehicle traffic is expected to be a major determinant of traffic load estimation. Further, heavy vehicle traffic is also a major input in transportation planning and economic studies. The traditional method for estimating heavy vehicle traffic relies primarily on AADT estimation using Monthly Day of the Week (MDOW) adjustment factors, together with the percentage of heavy vehicles observed through statewide data collection programs. The MDOW factors are developed using daily and seasonal (or monthly) variation patterns for total traffic, which consists predominantly of passenger cars and other smaller vehicles. Therefore, while using these factors may yield reasonable estimates for total traffic (AADT), such estimates may involve a great deal of approximation when applied to heavy vehicle traffic. This research assesses the approximation involved in estimating heavy vehicle traffic using MDOW adjustment factors for total traffic (the conventional approach), along with three other methods: MDOW adjustment factors for total trucks (classes 5-13), for combination-unit trucks (classes 8-13), and adjustment factors for each vehicle class separately. Results clearly indicate that the conventional method was outperformed by the other three methods by a large margin. Further, using the most detailed and data-intensive method (class-specific adjustment factors) does not necessarily yield a more accurate estimation of heavy vehicle traffic.
Keywords: traffic loads, heavy vehicles, truck traffic, adjustment factors, traffic data collection
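The adjustment-factor expansion the study compares can be sketched generically: a short-term count is multiplied by a month/day-of-week factor, either once with a total-traffic factor (conventional) or per vehicle class with class-specific factors. The factor values and data layout below are invented for illustration; real MDOW factors come from continuous-count station programs.

```python
def estimate_aadt(daily_count, month, dow, factors):
    """Expand one short-term daily count to an AADT estimate.
    factors[(month, dow)] = AADT / typical daily volume for that month and weekday."""
    return daily_count * factors[(month, dow)]

def heavy_vehicle_aadt(class_counts, month, dow, class_factors):
    """Class-specific method: apply each vehicle class its own factor, then sum."""
    return sum(estimate_aadt(count, month, dow, class_factors[cls])
               for cls, count in class_counts.items())
```

The conventional approach would instead apply a single total-traffic factor to the summed heavy vehicle count, which is the source of the approximation the study quantifies.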
Procedia PDF Downloads 25
588 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model
Authors: Didier Auroux, Vladimir Groza
Abstract:
This work is part of the STEEP Marie-Curie ITN project and focuses on the identification of unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The need to study this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose to identify the model parameters by minimizing a cost function measuring the difference between experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, makes it possible to find the unknowns of the AWJM model and their optimal values, which could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strictly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain acceptable results for manufacturing and to expect proper identification of the unknowns. This approach also gives us the ability to extend the research to more complex cases, considering different types of model and measurement errors as well as a 3D time-dependent model with variations of the jet feed speed.
Keywords: Abrasive Waterjet Milling, inverse problem, model parameters identification, regularization
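The regularized cost-function minimisation described above can be illustrated schematically. The authors use an adjoint gradient generated by TAPENADE on the AWJM PDE model; the sketch below is a stand-in under simplifying assumptions, substituting a finite-difference gradient and a Tikhonov penalty anchored at the initial guess, with an arbitrary forward model supplied by the caller.

```python
import numpy as np

def identify_parameters(forward, y_obs, p0, alpha=1e-2, iters=200, lr=0.1):
    """Gradient descent on J(p) = ||forward(p) - y_obs||^2 + alpha*||p - p0||^2.
    forward : callable mapping a parameter vector to a simulated observation vector.
    alpha   : Tikhonov regularization weight (stabilises the ill-posed problem).
    The finite-difference gradient stands in for the adjoint computation."""
    p = np.array(p0, dtype=float)

    def J(q):
        r = forward(q) - y_obs
        return float(r @ r + alpha * (q - p0) @ (q - p0))

    h = 1e-6
    for _ in range(iters):
        g = np.zeros_like(p)
        for i in range(len(p)):                 # central-difference gradient
            e = np.zeros_like(p)
            e[i] = h
            g[i] = (J(p + e) - J(p - e)) / (2 * h)
        p -= lr * g
    return p
```

With noisy data, increasing `alpha` trades fidelity to the measurements for stability of the recovered parameters, which mirrors the role of the regularization terms in the study.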
Procedia PDF Downloads 317
587 Importance-Performance Analysis of Volunteer Tourism in Ethiopia: Host and Guest Case Study
Authors: Zita Fomukong Andam
Abstract:
With the general objective of evaluating the importance and performance attributes of volunteer tourism in Ethiopia, and with the specific aims of ranking the importance attributes, evaluating Ethiopia's competitive performance in hosting volunteer tourists, laying the attributes out in a four-quadrant grid, and conducting an IPA Iso-Priority Line comparison, a deeper research discourse was conducted from both the host and guest points of view with 384 randomly selected guests and 165 hosts in Ethiopia. Findings of the discourse, through an exploratory research design covering both hosts and guests, confirm that the attributes of volunteer tourism generally and marginally fall in the south-east quadrant of the matrix, where importance is relatively higher than performance, also referred to as the ‘Concentrate Here’ quadrant. The fact that more items fall in this particular quadrant in both the host and guest studies, where they are highly important but their relative performance is low, sends the message that the country has more to do. Another focus of this study is mapping the attribute scores for guest and host importance and performance against the Iso-Priority Line. Results of the Iso-Priority Line analysis of the IPA of volunteer tourism in Ethiopia from the host’s perspective showed no attributes whose importance is exactly the same as their performance. Given this finding, and given that the research design has many characteristics of an exploratory study, the output is not confirmatory.
This paper therefore refrains from prescribing anything to the applied world before further confirmatory research is conducted on the issue, and instead calls on the scientific community to augment this study through comprehensive, exhaustive, extensive, and extended works of inquiry in order to deliver a refined set of recommended items to the applied world.
Keywords: volunteer tourism, competitive performance, importance-performance analysis, Ethiopian tourism
Procedia PDF Downloads 235
586 Mapping a Data Governance Framework to the Continuum of Care in the Active Assisted Living Context
Authors: Gaya Bin Noon, Thoko Hanjahanja-Phiri, Laura Xavier Fadrique, Plinio Pelegrini Morita, Hélène Vaillancourt, Jennifer Teague, Tania Donovska
Abstract:
Active Assisted Living (AAL) refers to systems designed to improve the quality of life, aid in independence, and create healthier lifestyles for care recipients. As the population ages, there is a pressing need for non-intrusive, continuous, adaptable, and reliable health monitoring tools to support aging in place. AAL has great potential to support these efforts with the wide variety of solutions currently available, but insufficient efforts have been made to address concerns arising from the integration of AAL into care. The purpose of this research was to (1) explore the integration of AAL technologies and data into the clinical pathway, and (2) map data access and governance for AAL technology in order to develop standards for use by policy-makers, technology manufacturers, and developers of smart communities for seniors. This was done through four successive research phases: (1) literature search to explore existing work in this area and identify lessons learned; (2) modeling of the continuum of care; (3) adapting a framework for data governance into the AAL context; and (4) interviews with stakeholders to explore the applicability of previous work. Opportunities for standards found in these research phases included a need for greater consistency in language and technology requirements, better role definition regarding who can access and who is responsible for taking action based on the gathered data, and understanding of the privacy-utility tradeoff inherent in using AAL technologies in care settings.
Keywords: active assisted living, aging in place, internet of things, standards
Procedia PDF Downloads 133
585 Application of Remote Sensing for Monitoring the Impact of Lapindo Mud Sedimentation for Mangrove Ecosystem, Case Study in Sidoarjo, East Java
Authors: Akbar Cahyadhi Pratama Putra, Tantri Utami Widhaningtyas, M. Randy Aswin
Abstract:
Indonesia, as an archipelagic nation, has a very long coastline with large potential marine resources, one of which is its mangrove ecosystems. The Lapindo mudflow disaster in Sidoarjo, East Java required the mudflow to be channelled into the sea through the Brantas and Porong rivers. The mud material transported by the river flow is feared to be dangerous because it may contain harmful substances such as heavy metals. This study aims to map the mangrove ecosystem in terms of its density, to determine how large the impact of the Lapindo mud disaster on the mangrove ecosystem has been, and to support efforts to maintain the continuity of the ecosystem. The mangrove conditions along the Sidoarjo coast were mapped using remote sensing products, namely Landsat 7 ETM+ images recorded in the dry months of 2002, 2006, 2009, and 2014. Mangrove density was detected using NDVI, which uses band 3 (the red channel) and band 4 (the near-infrared channel). Image processing to produce the NDVI was carried out with ENVI 5.1 software. The NDVI values used to detect mangrove density range from 0 to 1. Both the area and the density of the mangrove ecosystem have increased significantly from year to year. The growth of the mangrove ecosystem is affected by the deposition of Lapindo mud material in the Porong and Brantas river estuaries, where the silt provides a suitable and expanding growing medium for mangroves. The increase in density was also supported by public awareness, with mangrove planting carried out around the ponds to help contain the heavy metals in the Lapindo mud material.
Keywords: archipelagic nation, mangrove, Lapindo mudflow disaster, NDVI
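The NDVI computation from the red and near-infrared bands described above is standard: NDVI = (NIR - Red) / (NIR + Red), giving values in [-1, 1], with vegetated pixels typically in [0, 1]. The sketch below illustrates it; the density-class thresholds are invented for illustration and are not the study's calibration.

```python
def ndvi(red, nir):
    """Per-pixel NDVI from red (band 3) and near-infrared (band 4) rasters,
    given as nested lists of reflectance/DN values of equal shape."""
    return [[(n - r) / (n + r) if (n + r) else 0.0
             for r, n in zip(row_r, row_n)]
            for row_r, row_n in zip(red, nir)]

def mangrove_density_class(value):
    """Illustrative thresholds on NDVI in [0, 1] (assumed, not the study's)."""
    if value < 0.2:
        return "non-vegetation"
    if value < 0.45:
        return "sparse"
    if value < 0.65:
        return "moderate"
    return "dense"
```

In practice this would run over full Landsat scenes (e.g., in ENVI, as the study did), with the same formula applied band-wise.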
Procedia PDF Downloads 439
584 Facilitating Written Biology Assessment in Large-Enrollment Courses Using Machine Learning
Authors: Luanna B. Prevost, Kelli Carter, Margaurete Romero, Kirsti Martinez
Abstract:
Writing is an essential scientific practice, yet in several countries increasing university science class sizes limit the use of written assessments. Written assessments allow students to demonstrate their learning in their own words and permit faculty to evaluate students’ understanding. However, the time and resources required to grade written assessments prohibit their use in large-enrollment science courses. This study examined the use of machine learning algorithms to automatically analyze student writing and provide timely feedback to faculty about students’ writing in biology. Written responses to questions about matter and energy transformation were collected from large-enrollment undergraduate introductory biology classrooms. Responses were analyzed using the LightSide text mining and classification software. Cohen’s Kappa was used to measure agreement between the LightSide models and human raters. Predictive models achieved agreement with human coding of 0.7 Cohen’s Kappa or greater. The models captured that, when writing about matter-energy transformation at the ecosystem level, students focused primarily on the concepts of heat loss, recycling of matter, and conservation of matter and energy. Models were also produced to capture writing about processes such as decomposition and biochemical cycling. The models created in this study can be used to provide automatic feedback about students’ understanding of these concepts to biology faculty who wish to use formative written assessments in large-enrollment biology classes but do not have the time or personnel for manual grading.
Keywords: machine learning, written assessment, biology education, text mining
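The Cohen's Kappa statistic used to validate the models against human raters corrects raw agreement for agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal implementation for two coders over the same responses:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' labels over the same items.
    1.0 = perfect agreement, 0.0 = chance-level agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # observed proportion of agreement
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label frequencies
    pe = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe) if pe != 1 else 1.0
```

Values of 0.7 or greater, as reported in the study, are conventionally read as substantial agreement between model and human coding.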
Procedia PDF Downloads 281
583 Text Analysis to Support Structuring and Modelling a Public Policy Problem-Outline of an Algorithm to Extract Inferences from Textual Data
Authors: Claudia Ehrentraut, Osama Ibrahim, Hercules Dalianis
Abstract:
Policy-making situations are real-world problems that exhibit complexity in that they are composed of many interrelated problems and issues. To be effective, policies must holistically address the complexity of the situation rather than propose solutions to single problems. Formulating and understanding the situation and its complex dynamics is therefore key to finding holistic solutions. Analysis of text-based information on the policy problem, using Natural Language Processing (NLP) and text analysis techniques, can support modelling of public policy problem situations in a more objective way, based on domain experts’ knowledge and scientific evidence. The objective of this study is to support the modelling of public policy problem situations using text analysis of verbal descriptions of the problem. We propose a formal methodology for analysing qualitative data from multiple information sources on a policy problem to construct a causal diagram of the problem. The analysis process aims at identifying key variables, linking them by cause-effect relationships, and mapping that structure into a graphical representation that is adequate for designing action alternatives, i.e., policy options. This study describes the outline of an algorithm used to automate the initial step of a larger methodological approach, which has so far been done manually. In this initial step, inferences about key variables and their interrelationships are extracted from textual data to support better problem structuring. A small prototype for this step is also presented.
Keywords: public policy, problem structuring, qualitative analysis, natural language processing, algorithm, inference extraction
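The abstract does not specify the extraction rules of the algorithm. As a toy illustration of the general idea (pulling cause-effect variable pairs out of sentences via cue phrases, before assembling them into a causal diagram), one could sketch a cue-phrase matcher; the patterns and names below are invented, and a real system would rely on full NLP parsing.

```python
import re

# Illustrative cue-phrase patterns; a real inference extractor would use
# dependency parsing and semantic analysis rather than surface regexes.
CAUSAL_PATTERNS = [
    re.compile(r"(?P<cause>[\w\s]+?)\s+(?:leads to|causes|results in)\s+(?P<effect>[\w\s]+)", re.I),
    re.compile(r"(?P<effect>[\w\s]+?)\s+(?:is caused by|results from)\s+(?P<cause>[\w\s]+)", re.I),
]

def extract_causal_pairs(sentence):
    """Return (cause, effect) pairs found in one sentence; each pair would
    become a directed edge in the causal diagram."""
    pairs = []
    for pattern in CAUSAL_PATTERNS:
        match = pattern.search(sentence)
        if match:
            pairs.append((match.group("cause").strip(), match.group("effect").strip()))
    return pairs
```

Edges collected over a whole corpus can then be merged by variable name into the graphical representation used for designing policy options.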
Procedia PDF Downloads 590
582 Beyond Informality: Relocation from a Traditional Village 'Mit Oqbah' to Masaken El-Barageel and the Role of ‘Urf in Governing Built Environment, Egypt
Authors: Sarah Eldefrawi, Maike Didero
Abstract:
In Egypt, residents’ urban interventions (colloquially called A’hali’s interventions) are always characterized by the government, scholars, and the media as encroachment (taeadiyat), chaotic (a’shwa’i), or informal (gheir mokanan) practices. This paper argues that those interventions cannot simply be described as encroachment on public space or chaotic behaviour. We claim here that they are rooted in the traditional governing methods (‘Urf) that governed Arab cities for many decades. Through an in-depth field study conducted in a real estate public housing project in the city of Giza called 'Masaken El-Barageel', we traced the urban transformations demonstrated in private and public spaces. To understand these transformations, we used a wide range of qualitative research methods, such as semi-guided and informal interviews, observation, and mapping of the built environment and the newly added interventions. The study was further strengthened by the author’s contributions in studying nine sectors developed by A’hali in six districts of Greater Cairo. The results of this study indicate that a culturally and socially sensitive framework has to relate individual actions to the spatial and social structures as well as to culturally transmitted views and meanings connected with ‘Urf. The study traced three crucial principles in ‘Urf that influenced these interventions: the elimination of harm (Al-Marafiq wa Man’ al-Darar), the appropriation of space (Haqq el-Intefa’), and the public interest (maslaha a’ma). Our findings open the discussion on the (il)legitimacy of A’hali governing methods in contemporary cities.
Keywords: Urf, urban governance, public space, public housing, encroachments, chaotic, Egyptian cities
Procedia PDF Downloads 135
581 A Global Perspective on Neuropsychology: The Multicultural Neuropsychological Scale
Authors: Tünde Tifordiána Simonyi, Tímea Harmath-Tánczos
Abstract:
The primary aim of the current research is to present the significance of a multicultural perspective in clinical neuropsychology and to present the test battery of the Multicultural Neuropsychological Scale (MUNS). The method includes the MUNS screening tool, which involves stimuli common to most cultures in the world. The test battery measures general cognitive functioning, focusing on five cognitive domains (memory, executive function, language, visual construction, and attention) tested with seven subtests that can be utilized across a wide age range (15-89) and with both lower- and higher-education participants. It is a scale that is sensitive to mild cognitive impairment. Our study presents the first results with the Hungarian translation of the MUNS on a healthy sample. The education range was 4-25 years of schooling, and the Hungarian sample was recruited by snowball sampling. Within the investigated population (N=151), cognitive performance follows an inverted U-shaped curve with age, with a high load on memory. Age, reading fluency, and years of education significantly influenced test scores. The sample was tested twice within a 14-49 day interval to determine test-retest reliability, which is satisfactory. Besides the findings of the study and the introduction of the test battery, the article also highlights its potential benefits for both research and clinical neuropsychological practice. The importance of adapting, validating, and standardizing the test in languages other than Hungarian is also stressed. This test battery could serve as a helpful tool in mapping general cognitive functions in psychiatric and neurological disorders regardless of the cultural background of the patients.
Keywords: general cognitive functioning, multicultural, MUNS, neuropsychological test battery
Procedia PDF Downloads 112
580 Auto Calibration and Optimization of Large-Scale Water Resources Systems
Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari
Abstract:
Water resource systems modelling has constantly been a challenge for humankind throughout history. As innovative methodological developments evolve alongside computer science, researchers are likely to confront ever larger and more complex water resources systems due to new challenges regarding increased water demand, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme was applied to Gilan’s large-scale water resource model using mathematical programming. The calibration of the water resource model was developed in order to tune the unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure was validated by comparing several gauged river outflows from the system in the past with the model results. The calibration results are reasonable and provide a rational insight into the system. Subsequently, the optimized unknown parameters were used in a basin-scale linear optimization model with the ability to evaluate the system’s performance against a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to minimum water shortage under the reduced-inflow scenario.
Keywords: auto-calibration, Gilan, large-scale water resources, simulation
Procedia PDF Downloads 335
579 A Literature Review on the Effect of Financial Knowledge toward Corporate Growth: The Important Role of Financial Risk Attitude
Authors: Risna Wijayanti, Sumiati, Hanif Iswari
Abstract:
This study aims to analyze the role of financial risk attitude as a mediator between financial knowledge and business growth. The ability of human resources to manage capital (financial literacy) can be a major milestone for a company’s business to grow and build its competitive advantage. This study analyzed the important role of financial risk attitude in translating financial knowledge into corporate growth. There have been many discussions arguing that financial knowledge is one of the main abilities of corporate managers in determining the success of managing a company. However, other scholars have argued to the contrary that financial knowledge does not have a significant influence on corporate growth. This study used a literature review to analyze whether there is another variable that can mediate the effect of financial knowledge on corporate growth. Research mapping was conducted to analyze the concept of risk tolerance, which relates to the effects of people’s risk aversion when making decisions under risk and to the role of financial knowledge in changes in financial income. Understanding and managing risks and investments is complicated, in particular for corporate managers, who are always expected to maintain their corporate growth. Substantial financial knowledge is needed to identify and obtain accurate information for corporate financial decision-making. By reviewing the literature, this study hypothesized that the financial knowledge of corporate managers would be meaningless without the managers’ courage to bear risks in taking favorable business opportunities. Therefore, the level of risk aversion of corporate managers will determine corporate action, which is a reflection of corporate-level investment behavior leading the company to succeed or fail in achieving its expected growth rate.
Keywords: financial knowledge, financial risk attitude, corporate growth, risk tolerance
Procedia PDF Downloads 130
578 Analysis and Mapping of Climate and Spring Yield in Tanahun District, Nepal
Authors: Resham Lal Phuldel
Abstract:
This study is based on a bilateral development cooperation project funded by the governments of Nepal and Finland. The first phase of the project was completed in August 2012; Phase II started in September 2013 and will end in September 2018. The project strengthens the capacity of local governments in 14 districts to deliver water supply, sanitation, and hygiene services in the Western and Mid-Western development regions of Nepal. In recent years, several spring sources across the country have dried up or are slowly decreasing in yield due to the changing character of rainfall, increasing evaporative losses, and man-made causes such as land use change and infrastructure development work. To sustain the hill communities, the sources have to be able to provide sufficient water to serve the population, either on their own or in conjunction with other sources. An earlier project phase measured all water sources in Tanahun district in 2004, and the sources were located with GPS. Phase II repeated the exercise to see the changes in the district: 3,320 water sources were identified in 2004, and altogether 4,223 sources, including new ones, were identified and measured in 2014. Between 2004 and 2014, a 50% reduction in the average flow rate (yield) of point sources was found over the 10 years. Similarly, reductions of 21.6% and 34% in average yield were found for spring and stream water sources, respectively. The rainfall records from 2002 to 2013 show erratic rainfall in the district; the monsoon peak month is not consistent, and the trend shows a decrease in annual rainfall of 16.7 mm/year. Further, the temperature trend between 2002 and 2013 shows warming of +0.041 °C/year.
Keywords: climate change, rainfall, source discharge, water sources
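The reported yield reductions between the two survey rounds are straightforward percentage changes per source type. The sketch below reproduces them; only the reduction percentages (50%, 21.6%, 34%) come from the study, while the absolute yield figures and units are invented for illustration.

```python
def percent_reduction(old, new):
    """Percentage fall in average yield between two survey rounds."""
    return 100.0 * (old - new) / old

# Illustrative average yields (e.g. litres/min); the absolute values are assumed,
# chosen only so the reductions match the percentages reported in the study.
avg_yield_2004 = {"point": 2.0, "spring": 10.0, "stream": 50.0}
avg_yield_2014 = {"point": 1.0, "spring": 7.84, "stream": 33.0}

reductions = {src: percent_reduction(avg_yield_2004[src], avg_yield_2014[src])
              for src in avg_yield_2004}
```

The same comparison, applied source by source to the 2004 and 2014 GPS-located measurements, underlies the district-wide averages quoted above.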
Procedia PDF Downloads 284
577 The Comparison between Modelled and Measured Nitrogen Dioxide Concentrations in Cold and Warm Seasons in Kaunas
Authors: A. Miškinytė, A. Dėdelė
Abstract:
Road traffic is one of the main sources of air pollution in urban areas and is associated with adverse effects on human health and the environment. Nitrogen dioxide (NO2) is considered a traffic-related air pollutant whose concentrations tend to be higher near highways, along busy roads, and in city centres, and exceedances are mainly observed at air quality monitoring stations located close to traffic. Atmospheric dispersion models can be used to examine emissions from many different sources and to predict the concentrations of pollutants emitted from these sources into the atmosphere. The aim of the study was to compare modelled concentrations of nitrogen dioxide, obtained with the ADMS-Urban dispersion model, against the air quality monitoring network in the cold and warm seasons in Kaunas city. Modelled average seasonal concentrations of nitrogen dioxide for the year 2011 were verified against automatic air quality monitoring data from two stations in the city: a traffic station located near a high-traffic street in an industrial district, and a background station far away from the main sources of nitrogen dioxide pollution. The modelling results showed that the highest nitrogen dioxide concentration was both modelled and measured at the station located near the high-traffic street, in both the cold and the warm season. The modelled and measured nitrogen dioxide concentrations there were, respectively, 25.7 and 25.2 µg/m3 in the cold season and 15.5 and 17.7 µg/m3 in the warm season. The lowest modelled and measured NO2 concentrations were determined at the background monitoring station: respectively 12.2 and 13.3 µg/m3 in the cold season and 6.1 and 7.6 µg/m3 in the warm season. Comparing the two stations showed that better agreement between modelled and measured NO2 concentrations was observed at the traffic monitoring station.
Keywords: air pollution, nitrogen dioxide, modelling, ADMS-Urban model
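The model-versus-measurement comparison can be summarised with simple agreement statistics over the reported seasonal pairs. The four values below are the seasonal NO2 concentrations quoted in the abstract (traffic station cold/warm, then background station cold/warm); the statistics chosen (mean bias, RMSE) are a common convention, not necessarily the ones the study used.

```python
def bias(modelled, measured):
    """Mean bias: positive means the model over-predicts on average."""
    return sum(m - o for m, o in zip(modelled, measured)) / len(modelled)

def rmse(modelled, measured):
    """Root-mean-square error between modelled and measured values."""
    return (sum((m - o) ** 2 for m, o in zip(modelled, measured))
            / len(modelled)) ** 0.5

# Seasonal NO2 concentrations (ug/m3) reported in the abstract:
# traffic station (cold, warm), background station (cold, warm).
modelled = [25.7, 15.5, 12.2, 6.1]
measured = [25.2, 17.7, 13.3, 7.6]
```

On these four pairs the model is biased slightly low overall, consistent with the abstract's observation that agreement is closest at the traffic station in the cold season.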
Procedia PDF Downloads 409
576 Long Term Examination of the Profitability Estimation Focused on Benefits
Authors: Stephan Printz, Kristina Lahl, René Vossen, Sabina Jeschke
Abstract:
Strategic investment decisions are characterized by high innovation potential and long-term effects on the competitiveness of enterprises. Due to the uncertainty and risks involved in this complex decision-making process, the need arises for well-structured support activities. A method that considers both cost and long-term added value is cost-benefit effectiveness estimation. One such method is the “profitability estimation focused on benefits” (PEFB) method developed at the Institute of Management Cybernetics at RWTH Aachen University. The method copes with the challenges associated with strategic investment decisions by integrating long-term non-monetary aspects whilst also mapping the chronological sequence of an investment within the organization's target system. Thus, the method is characterized as a holistic approach to the evaluation of the costs and benefits of an investment. This participation-oriented method was applied to business environments in many workshops. The results of the workshops are a library of more than 96 cost aspects and 122 benefit aspects. These aspects are preprocessed and comparatively analyzed with regard to their alignment to a series of risk levels. For the first time, an accumulation and a distribution of cost and benefit aspects regarding their impact and probability of occurrence are given. The results give evidence that the PEFB method combines precise measures of financial accounting with the incorporation of benefits. Finally, the results constitute the basis for using information technology and data science for decision support when applying the PEFB method.
Keywords: cost-benefit analysis, multi-criteria decision, profitability estimation focused on benefits, risk and uncertainty analysis
Procedia PDF Downloads 445
575 Determination of Optimum Parameters for Thermal Stress Distribution in Composite Plate Containing a Triangular Cutout by Optimization Method
Authors: Mohammad Hossein Bayati Chaleshtari, Hadi Khoramishad
Abstract:
Minimizing the stress concentration around a triangular cutout in infinite perforated plates subjected to a uniform heat flux, which induces thermal stresses, is an important consideration in engineering design. Furthermore, understanding the parameters that affect stress concentration, and selecting them properly, enables the designer to achieve a reliable design. In thermal stress analysis of orthotropic materials, the parameters affecting the stress distribution around a cutout include the fiber angle, flux angle, bluntness and rotation angle of the cutout. This paper examines the effect of these parameters on the thermal stress analysis of infinite perforated plates with a central triangular cutout. The least thermal stress around the triangular cutout was sought using a novel swarm intelligence optimization technique called the dragonfly optimizer, inspired by the swarming and hunting behavior of dragonflies in nature. In this study, using two-dimensional thermoelastic theory and based on Likhnitskii's complex variable technique, the stress analysis of an orthotropic infinite plate with a circular cutout under a uniform heat flux was extended to a plate containing a quasi-triangular cutout in the thermal steady state. To achieve this, a conformal mapping function was used to map the infinite plate containing a quasi-triangular cutout onto the outside of a unit circle. The plate is under uniform heat flux at infinity; Neumann boundary conditions and a thermally insulated condition at the edge of the cutout were considered.
Keywords: infinite perforated plate, complex variable method, thermal stress, optimization method
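As a rough illustration of the optimization step, the following sketch uses plain random search over the four design parameters named above (fiber angle, flux angle, bluntness, rotation angle). It is not the dragonfly algorithm itself, and the objective is an invented smooth proxy, not the paper's thermoelastic stress field.

```python
import math
import random

random.seed(0)

def stress_proxy(fiber, flux, blunt, rot):
    # Invented smooth proxy objective with a known minimum near
    # (0.5, 1.0, 0.0, 0.2); NOT the paper's stress function.
    return (math.sin(fiber - 0.5) ** 2 + (flux - 1.0) ** 2
            + blunt ** 2 + (rot - 0.2) ** 2)

best_x, best_f = None, float("inf")
for _ in range(20000):
    # sample each angle-like parameter uniformly over an assumed [0, pi] range
    x = [random.uniform(0, math.pi) for _ in range(4)]
    f = stress_proxy(*x)
    if f < best_f:
        best_x, best_f = x, f

print(f"best proxy stress: {best_f:.4f}")
```

A swarm method such as the dragonfly optimizer would replace the uniform sampling with position updates driven by neighbour attraction, alignment and cohesion, but the search-and-evaluate loop has the same shape.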
Procedia PDF Downloads 150
574 Analysis of Underground Logistics Transportation Technology and Planning Research: Based on Xiong'an New Area, China
Authors: Xia Luo, Cheng Zeng
Abstract:
Under the promotion of the Central Committee of the Communist Party of China and the State Council in 2017, Xiong'an New Area is the third crucial new area established in China after Shenzhen and Shanghai. The significance of its construction lies in mitigating Beijing's non-capital functions and exploring a new mode of optimized development in densely populated and economically intensive areas. For this purpose, developing underground logistics can assume the role of goods distribution in the capital, relieve road transport pressure in the Beijing-Tianjin-Hebei Urban Agglomeration, and adjust and optimize its urban layout and spatial structure. Firstly, the construction planning of Xiong'an New Area and underground logistics development are summarized, in particular the state of development abroad and the development trends and bottlenecks of underground logistics in China. This paper then explores the technicality, feasibility, and necessity of four modes of transportation: pneumatic capsule pipeline (PCP) technology, CargoCap technology, the cable-hauled mule, and the automatic guided vehicle (AGV). The technical parameters and characteristics of each were presented to relevant experts and scholars. By establishing an indicator system and carrying out a questionnaire survey with the Delphi method, the final suggestion is obtained: China should develop logistics vehicles similar to CargoCap, adopting a rail-based and driverless mode. Based on China's temporal and spatial logistics demand and the geographical pattern of Xiong'an New Area, key parameters of the underground logistics system, such as construction scale, technical parameters and node locations, are planned. In this way, we hope to speed up the new area's construction and the logistics industry's innovation.
Keywords: the Xiong'an new area, underground logistics, contrastive analysis, CargoCap, logistics planning
Procedia PDF Downloads 129
573 The Effects of “Never Pressure Injury” on the Incidence of Pressure Injuries in Critically Ill Patients
Authors: Nuchjaree Kidjawan, Orapan Thosingha, Pawinee Vaipatama, Prakrankiat Youngkong, Sirinapha Malangputhong, Kitti Thamrongaphichartkul, Phatcharaporn Phetcharat
Abstract:
The Never Pressure Injury (NPI) mattress uses sensor technology, with the data processed by an AI system. Its main features are an individual interface pressure sensor system in contact with the mattress and a position management system in which the sensor detects a predetermined pressure and automatically reduces and redistributes it. The role of the NPI is to monitor, identify risk, and manage interface pressure automatically when the determined pressure is detected. This study aims to evaluate the effects of the NPI innovative mattress on the incidence of pressure injuries in critically ill patients. An observational case-control study was employed to compare the incidence of pressure injury between the case and the control group. The control group comprised 80 critically ill patients admitted to a critical care unit of Phyathai3 Hospital, receiving standard care with the use of memory foam according to intensive care unit guidelines. The case group comprised 80 critically ill patients receiving standard care together with the NPI innovation mattress. Patients who were over 20 years old, scored less than 18 on the Risk Assessment Pressure Ulcer Scale – ICU, and stayed in the ICU for more than 24 hours were selected for the study. The patients' skin was assessed for the occurrence of pressure injury once a day for five consecutive days or until the patients were discharged from the ICU. The sample comprised 160 patients with ages ranging from 30 to 102 (mean = 70.1 years) and a Body Mass Index ranging from 13.69 to 49.01 (mean = 24.63). The case and the control group did not differ in sex, age, Body Mass Index, Pressure Ulcer Risk Scores, or length of ICU stay. Twenty-two patients (27.5%) in the control group had pressure injuries, while no pressure injury was found in the case group.
Keywords: pressure injury, never pressure injury, innovation mattress, critically ill patients, prevent pressure injury
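The 22/80 vs 0/80 incidence contrast can be checked with a one-sided Fisher exact test. The sketch below is a back-of-the-envelope computation from the counts reported in the abstract, not an analysis the study itself reports.

```python
from math import comb

def fisher_one_sided_p(events_control, n_control, events_case, n_case):
    """P(observing <= events_case events in the case group) under the
    hypergeometric null that group labels are unrelated to outcome."""
    total = n_control + n_case
    events = events_control + events_case
    p = 0.0
    for x in range(events_case + 1):
        p += comb(n_case, x) * comb(n_control, events - x) / comb(total, events)
    return p

# 22/80 pressure injuries with standard memory foam vs 0/80 with NPI
p = fisher_one_sided_p(22, 80, 0, 80)
print(f"one-sided p = {p:.2e}")
```

On these counts the p-value is far below conventional significance thresholds, which is what one would expect given a 27.5% vs 0% incidence split.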
Procedia PDF Downloads 126
572 TutorBot+: Automatic Programming Assistant with Positive Feedback based on LLMs
Authors: Claudia Martínez-Araneda, Mariella Gutiérrez, Pedro Gómez, Diego Maldonado, Alejandra Segura, Christian Vidal-Castro
Abstract:
The purpose of this document is to showcase preliminary work on developing an EduChatbot-type tool and measuring the effects of its use, aimed at providing effective feedback to students in programming courses. This bot, hereinafter referred to as tutorBot+, was constructed based on chatGPT and is tasked with assisting and delivering timely positive feedback to students in the field of computer science at the Universidad Católica de Concepción. The proposed working method consists of four stages: (1) immersion in the domain of Large Language Models (LLMs), (2) development and integration of the tutorBot+ prototype, (3) experiment design, and (4) intervention. The first stage involves a literature review on the use of artificial intelligence in education and the evaluation of intelligent tutors, as well as research on types of feedback for learning and the domain of chatGPT. The second stage encompasses the development of tutorBot+, and the final stage involves a quasi-experimental study with students from the Programming and Database labs, where the learning outcome is the development of computational thinking skills, enabling the tool's effects to be used and measured. The preliminary results of this work are promising: a functional chatbot prototype has been developed, in both conversational and non-conversational versions, integrated into an open-source online judge and programming contest platform. The possibility of generating a custom model, based on a pre-trained one and tailored to the programming domain, is also being explored. This includes the integration of the created tool and the design of the experiment to measure its utility.
Keywords: assessment, chatGPT, learning strategies, LLMs, timely feedback
Procedia PDF Downloads 69
571 A Radiomics Approach to Predict the Evolution of Prostate Imaging Reporting and Data System Score 3/5 Prostate Areas in Multiparametric Magnetic Resonance
Authors: Natascha C. D'Amico, Enzo Grossi, Giovanni Valbusa, Ala Malasevschi, Gianpiero Cardone, Sergio Papa
Abstract:
Purpose: To characterize, through a radiomic approach, the nature of areas classified PI-RADS (Prostate Imaging Reporting and Data System) 3/5, recognized in multiparametric prostate magnetic resonance with T2-weighted (T2w), diffusion, and paramagnetic-contrast perfusion sequences. Methods and Materials: 24 cases undergoing multiparametric prostate MR and biopsy were admitted to this pilot study. The clinical outcome of the PI-RADS 3/5 areas was determined through biopsy, which identified 8 malignant tumours. The analysed images were acquired with a Philips Achieva 1.5T scanner with a CE-T2-weighted sequence in the axial plane. Semi-automatic tumour segmentation was carried out on the MR images using the 3D Slicer image analysis software. 45 shape-based, intensity-based and texture-based features were extracted and represented the input for preprocessing. An evolutionary algorithm (a TWIST system based on the KNN algorithm) was used to subdivide the dataset into training and testing sets and to select the features yielding the maximal amount of information. After this preprocessing, 20 input variables were selected, and different machine learning systems were used to develop a predictive model based on a training-testing crossover procedure. Results: The best machine learning system (a three-layer feed-forward neural network) obtained a global accuracy of 90% (80% sensitivity and 100% specificity) with an area under the ROC curve of 0.82. Conclusion: Machine learning systems coupled with radiomics show promising potential in distinguishing benign from malignant tumours in PI-RADS 3/5 areas.
Keywords: machine learning, MR prostate, PI-RADS 3, radiomics
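The reported figures follow the standard confusion-matrix definitions. The counts below are hypothetical, chosen only because they reproduce the headline 90% accuracy, 80% sensitivity and 100% specificity; the abstract does not give the actual test-set breakdown.

```python
def sensitivity(tp, fn):
    """True positive rate: fraction of malignant cases correctly flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: fraction of benign cases correctly cleared."""
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    """Overall fraction of correct predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion-matrix counts (not from the paper) that happen to
# yield the reported metrics.
tp, fn, tn, fp = 8, 2, 10, 0
print(sensitivity(tp, fn), specificity(tn, fp), accuracy(tp, tn, fp, fn))
```

Note that with only 8 malignant cases among 24 in the pilot, each misclassified case moves sensitivity by a large step, which is one reason such pilot results warrant validation on larger cohorts.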
Procedia PDF Downloads 188
570 Evaluation of Railway Network and Service Performance Based on Transportation Sustainability in DKI Jakarta
Authors: Nur Bella Octoria Bella, Ayomi Dita Rarasati
Abstract:
DKI Jakarta is Indonesia's capital city, with the 10th highest congestion rate in the world according to the 2019 traffic index. In addition, the 2019 World Air Quality Report put DKI Jakarta's air pollutant concentration at 49.4 µg/m³, the 5th highest in the world. In today's urban cities the mobility rate is high, and efficiency in the sustainability assessment of transport infrastructure development is needed; this efficiency is the key to sustainable infrastructure development. DKI Jakarta is currently constructing railway infrastructure to support its transportation system. The question that arises is whether the railway networks and services in DKI Jakarta have been planned with sustainability factors in mind. Therefore, the aim of this research is to evaluate the performance of railway networks and services in DKI Jakarta with regard to key railway sustainability factors. This evaluation will then be used to build a railway sustainability assessment framework and to offer alternative solutions for improving railway transportation sustainability in DKI Jakarta. First, a detailed literature review was conducted of papers focusing on railway sustainability factors and improvements to railway sustainability, published in scientific journals in the period 2011 to 2021. The sustainability factors drawn from the literature review are then used to assess the current condition of railway infrastructure in DKI Jakarta. The evaluation uses a Likert-scale questionnaire directed at railway transportation experts and passengers. Furthermore, the mapping and evaluation ratings based on the sustainability factors will be compared against the effect factors using the Analytic Hierarchy Process (AHP).
This research establishes the impact of network performance and service ratings on sustainability and on passengers' willingness to use rail public transportation in DKI Jakarta.
Keywords: transportation sustainability, railway transportation, sustainability, DKI Jakarta
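The AHP step works by deriving priority weights from a pairwise comparison matrix of criteria. The sketch below uses the common geometric-mean approximation of the principal eigenvector, with an invented 3x3 matrix over assumed criteria names; it is not the study's actual factors or expert judgments.

```python
from math import prod

# Assumed criteria and an illustrative pairwise comparison matrix (Saaty
# scale): A[i][j] states how much more important criterion i is than j.
criteria = ["network performance", "service quality", "environmental impact"]
A = [
    [1.0,   2.0,   3.0],
    [1 / 2, 1.0,   2.0],
    [1 / 3, 1 / 2, 1.0],
]

def ahp_weights(matrix):
    """Approximate the principal eigenvector via row geometric means."""
    n = len(matrix)
    gms = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

weights = ahp_weights(A)
for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```

A full AHP analysis would also compute a consistency ratio to check that the expert judgments are not self-contradictory before accepting the weights.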
Procedia PDF Downloads 165
569 Using Visualization Techniques to Support Common Clinical Tasks in Clinical Documentation
Authors: Jonah Kenei, Elisha Opiyo
Abstract:
Electronic health records, as a repository of patient information, are nowadays the most commonly used technology to record, store and review patient clinical records and to perform other clinical tasks. However, the accurate identification and retrieval of relevant information from clinical records is difficult because clinical documents are unstructured and in particular lack a clear organization. Medical practice is therefore facing a challenge from the rapid growth of health information in electronic health records (EHRs), mostly in narrative text form, and it is becoming important to effectively manage the growing amount of data for a single patient. There is currently a need to visualize EHRs in a way that aids physicians in clinical tasks and medical decision-making. Applying text visualization techniques to unstructured clinical narrative texts is a new area of research that aims to provide better information extraction and retrieval to support clinical decision support in scenarios where the data generated continues to grow. Clinical datasets in EHRs offer considerable potential for training accurate statistical models to classify facets of information, which can then be used to improve patient care and outcomes. However, in many clinical note datasets, the unstructured nature of the clinical texts is a common problem. This paper examines the very issue of taking raw clinical texts and mapping them into meaningful structures that can support healthcare professionals utilizing narrative texts. Our work is the result of a collaborative design process that was aided by empirical data collected through formal usability testing.
Keywords: classification, electronic health records, narrative texts, visualization
Procedia PDF Downloads 118
568 Uniqueness of Fingerprint Biometrics to Human Dynasty: A Review
Authors: Siddharatha Sharma
Abstract:
With the advent of technology and machines, biometrics is taking an important place in society for secured living. Security issues are a major concern in today's world and continue to grow in intensity and complexity. Biometrics-based recognition, which involves precise measurement of the characteristics of living beings, is not a new method. Fingerprints have been used for years by law enforcement and forensic agencies to identify culprits and apprehend them. Biometrics is based on four basic principles: (i) uniqueness, (ii) accuracy, (iii) permanency and (iv) peculiarity. In today's world, fingerprints are the most popular and unique biometric method, with a claimed social benefit in government-sponsored programs; a remarkable example is UIDAI (Unique Identification Authority of India) in India. In the case of fingerprint biometrics, the matching accuracy is very high; it has been observed empirically that even identical twins do not have similar prints. With the passage of time there has been immense progress in sensing techniques, computational speed, operating environments and storage capabilities, and the technology has become more convenient for users. Only a small fraction of the population may be unsuitable for automatic identification because of genetic factors, aging, or environmental or occupational reasons, for example workers whose cuts and bruises keep their fingerprints changing. Fingerprints are limited to human beings because of the presence of volar skin with corrugated ridges, which is unique to this species. Fingerprint biometrics has proved to be a high-level authentication system for identifying human beings, though it has limitations; for example, authentication may be inefficient and ineffective if the ridges of the fingers or palm are moist.
This paper focuses on the uniqueness of fingerprints to human beings in comparison to other living beings and reviews the advancement of emerging technologies and their limitations.
Keywords: fingerprinting, biometrics, human beings, authentication
Procedia PDF Downloads 325
567 The Design Method of Artificial Intelligence Learning Picture: A Case Study of DCAI's New Teaching
Authors: Weichen Chang
Abstract:
To create a guided teaching method for AI generative drawing design, this paper develops a set of teaching models for AI generative drawing (DCAI) that combines learning modes such as problem-solving, thematic inquiry, phenomenon-based, task-oriented, and DFC. Through an information-security AI picture book guided learning program and its content, participatory action research (PAR) and interviews are applied to explore how the dual knowledge of Context and ChatGPT (DCAI) guides the development of students' AI learning skills. In the interviews, the students highlighted five main learning outcomes (self-study, critical thinking, knowledge generation, cognitive development, and presentation of work) as well as the challenges of implementing the model. Through the use of DCAI, students will enhance their consensus awareness of generative mapping analysis and group cooperation, and they will gain knowledge that can enhance AI capabilities in DCAI inquiry and in future life. This paper finds that: (1) good use of DCAI can assist students in exploring the value of their knowledge through the power of stories and in finding the meaning of knowledge communication; (2) the context can be analysed for the transformative power of a story's integrity and coherence, so as to achieve the tension of ‘starting and ending’; (3) ChatGPT can be used to extract inspiration, arrange story compositions, and produce prompts that communicate with people and convey emotions. Therefore, new methods of knowledge construction will be among the effective methods for AI learning in the face of artificial intelligence, providing new thinking and new expression for interdisciplinary design and design education practice.
Keywords: artificial intelligence, task-oriented, contextualization, design education
Procedia PDF Downloads 34
566 EcoMush: Mapping Sustainable Mushroom Production in Bangladesh
Authors: A. A. Sadia, A. Emdad, E. Hossain
Abstract:
The increasing importance of mushrooms as a source of nutrition and health benefits, and even as a potential cancer treatment, has raised awareness of the impact of climate-sensitive variables on their cultivation. Factors like temperature, relative humidity, air quality, and substrate composition play pivotal roles in shaping mushroom growth, especially in Bangladesh. Oyster mushrooms, a commonly cultivated variety in this region, are particularly vulnerable to climate fluctuations. This research explores the climatic dynamics affecting oyster mushroom cultivation and presents an approach to address these challenges, providing tangible solutions to fortify the agro-economy, ensure food security, and promote the sustainability of this crucial food source. Using climate and production data, this study evaluates the performance of three clustering algorithms (KMeans, OPTICS, and BIRCH) based on various quality metrics. While each algorithm demonstrates specific strengths, the findings provide insights into their effectiveness for this specific dataset. The results yield essential information, pinpointing the optimal temperature range of 13°C-22°C, the unfavorable temperature threshold of 28°C and above, and the ideal relative humidity range of 75-85%, together with the suitable production regions in three different seasons: Kharif-1, Kharif-2, and Robi. Additionally, a user-friendly web application was developed to support mushroom farmers in making well-informed decisions about their cultivation practices. This platform offers valuable insights into the most advantageous periods for oyster mushroom farming, with the overarching goal of enhancing the efficiency and profitability of mushroom farming.
Keywords: climate variability, mushroom cultivation, clustering techniques, food security, sustainability, web-application
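A minimal version of the KMeans step can be sketched in a few lines. The 1-D Lloyd's-algorithm implementation and the temperature readings below are illustrative only: they echo the favorable 13-22°C vs unfavorable ≥28°C regimes, but they are not the study's data, and the study additionally evaluated OPTICS and BIRCH.

```python
def kmeans_1d(xs, centers, iters=50):
    """Lloyd's algorithm on scalar data: alternate nearest-centre
    assignment and per-cluster mean update."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in xs:
            idx = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            clusters[idx].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Invented temperature readings spanning the two regimes, with initial
# centre guesses inside each regime.
temps = [14, 16, 18, 21, 22, 28, 29, 31, 33]
centers, clusters = kmeans_1d(temps, centers=[15.0, 30.0])
print(centers, clusters)
```

Real runs would use multi-dimensional features (temperature, humidity, air quality) and compare the three algorithms with the quality metrics the abstract mentions.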
Procedia PDF Downloads 71
565 Blood Analysis of Diarrheal Calves Using Portable Blood Analyzer: Analysis of Calves by Age
Authors: Kwangman Park, Jinhee Kang, Suhee Kim, Dohyeon Yu, Kyoungseong Choi, Jinho Park
Abstract:
Statement of the Problem: Diarrhea is a major cause of death in young calves, causing great economic damage to the livestock industry. Diarrhea leads to dehydration, decreased blood flow, lowered pH and impaired enzyme function. In the past, serum screening was not possible in the field, but with the spread of portable serum testing devices, tests can now be conducted directly in the field, so accurate serological changes can be identified and used in large animal practice. Methodology and Theoretical Orientation: The test groups were calves from 1 to 44 days old. The state of the feces was divided into four grades to determine the severity of diarrhea (grades 0, 1, 2, 3). Grades 0 and 1 were considered non-diarrheal; grades 2 and 3 constituted the diarrhea-positive group, in which one or more viruses were detected. The diarrhea-negative group consisted of 57 calves (Asan = 30, Samrye = 27); the diarrhea-positive group consisted of 34 calves (Kimje = 27, Geochang = 7). The feces of all calves were analyzed by PCR testing. Blood samples were measured using an automatic blood analyzer (i-STAT, Abbott Inc., Illinois, US). Calves were divided into three groups according to age: group 1, 1 to 14 days old; group 2, 15 to 28 days old; group 3, more than 28 days old. Findings: Diarrhea caused an increase in HCT due to dehydration; the difference from normal was highest at 15 to 28 days of age (p < 0.01). At all ages, bicarbonate decreased compared to normal, and therefore pH decreased; similar to HCT, the largest difference was observed between 15 and 28 days (p < 0.01). The pCO₂ decreases to compensate for the decrease in pH. Conclusion and Significance: At all ages, HCT increases and bicarbonate, pH, and pCO₂ decrease in diarrheal calves. Calves from 15 to 28 days of age show the greatest difference from normal.
In calves over 28 days of age, weight gain and homeostatic capacity increase; although diarrhea is seen in the stool, there are fewer hematologic changes than in the groups below 28 days of age.
Keywords: calves, diarrhea, hematological changes, i-STAT
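The bicarbonate-pH-pCO₂ relationship the findings describe is governed by the Henderson-Hasselbalch equation, which blood gas analysis commonly uses to relate these analytes. The values in the sketch below are textbook-style illustrations, not the study's measurements.

```python
from math import log10

def blood_ph(hco3_mmol_l, pco2_mmhg):
    """Henderson-Hasselbalch: pH = 6.1 + log10(HCO3- / (0.03 * pCO2))."""
    return 6.1 + log10(hco3_mmol_l / (0.03 * pco2_mmhg))

# Illustrative values only: a normal acid-base state vs a diarrheal state
# with low bicarbonate and compensatory low pCO2.
print(round(blood_ph(24, 40), 2))  # normal: 7.4
print(round(blood_ph(12, 30), 2))  # still acidotic despite compensation
```

This makes the compensation pattern in the findings concrete: halving bicarbonate drops pH, and lowering pCO₂ pulls the pH partway back toward normal without fully restoring it.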
Procedia PDF Downloads 161
564 Roof and Road Network Detection through Object Oriented SVM Approach Using Low Density LiDAR and Optical Imagery in Misamis Oriental, Philippines
Authors: Jigg L. Pelayo, Ricardo G. Villar, Einstine M. Opiso
Abstract:
Advances in aerial laser scanning in the Philippines have opened up entire fields of research in remote sensing and machine vision that aspire to provide accurate, timely information for the government and the public. Rapid mapping of polygonal road and roof boundaries is one such application, serving disaster risk reduction, mitigation and development. The study uses low-density LiDAR data and high-resolution aerial imagery in an object-oriented approach, applying the theoretical concepts of data analysis to a machine learning algorithm in order to minimize the constraints of feature extraction. Since separating one class from another requires distinct regions of a multi-dimensional feature space, non-trivial distribution-fitting computations were implemented to formulate the learned ideal hyperplane. Customized hybrid features were generated and then used to improve the classifier's findings. Supplemental algorithms for filtering and reshaping object features were developed in the rule set to enhance the final product. Several advantages in terms of simplicity, applicability, and process transferability are noticeable in the methodology. The algorithm was tested at different random locations in Misamis Oriental province in the Philippines, demonstrating robust performance with an overall accuracy greater than 89% and potential for semi-automation. The extracted results will become a vital input for decision makers, urban planners and even the commercial sector in various assessment processes.
Keywords: feature extraction, machine learning, OBIA, remote sensing
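The hyperplane-fitting idea can be illustrated with a tiny stand-in: a perceptron (not the SVM used in the study) learning to separate two invented classes of object features. The feature names, values and labels below are assumptions for illustration only.

```python
def train_hyperplane(samples, labels, epochs=100, lr=0.1):
    """Perceptron: nudge (w, b) whenever a sample lands on the wrong side."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y in {-1, +1}
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Invented features: (normalized height above ground, image intensity)
roofs = [(0.9, 0.4), (0.8, 0.5), (0.95, 0.3)]  # label +1
roads = [(0.05, 0.7), (0.1, 0.8), (0.0, 0.6)]  # label -1
w, b = train_hyperplane(roofs + roads, [1, 1, 1, -1, -1, -1])

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

An SVM differs in that it picks the maximum-margin hyperplane rather than any separating one, which is what the non-trivial distribution fitting in the study is for.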
Procedia PDF Downloads 363
563 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors along a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision-making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: linked open data, information integration, digital libraries, data mining
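The naive Bayes recommendation step can be sketched as follows. The risk-factor names, values and training rows are invented for illustration; real file-format risk profiles would have many more dimensions and finer-grained endangerment groups.

```python
import math
from collections import Counter, defaultdict

# Invented training data: each row is (format obsolescence, community
# support), each label an endangerment group.
rows = [("high", "low"), ("high", "high"), ("low", "high"), ("low", "low")]
labels = ["endangered", "endangered", "safe", "safe"]

def train_nb(rows, labels):
    n = len(rows)
    prior = {c: k / n for c, k in Counter(labels).items()}
    # class -> factor index -> value counts
    cond = defaultdict(lambda: defaultdict(Counter))
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            cond[c][i][v] += 1
    return prior, cond, Counter(labels)

def classify(row, prior, cond, counts, alpha=1.0):
    """Pick the class maximizing the log-posterior, with Laplace smoothing
    (the denominator assumes two possible values per factor)."""
    best, best_score = None, float("-inf")
    for c, p in prior.items():
        score = math.log(p)
        for i, v in enumerate(row):
            score += math.log((cond[c][i][v] + alpha) / (counts[c] + 2 * alpha))
        if score > best_score:
            best, best_score = c, score
    return best

model = train_nb(rows, labels)
print(classify(("high", "low"), *model))  # endangered
```

In the paper's setting the recommendation is semi-automatic: the classifier proposes an endangerment group, and the digital preservation expert confirms or overrides it.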
Procedia PDF Downloads 428